00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 4080 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3670 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.295 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.296 The recommended git tool is: git 00:00:00.296 using credential 00000000-0000-0000-0000-000000000002 00:00:00.298 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.319 Fetching changes from the remote Git repository 00:00:00.322 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.337 Using shallow fetch with depth 1 00:00:00.337 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.337 > git --version # timeout=10 00:00:00.355 > git --version # 'git version 2.39.2' 00:00:00.355 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.372 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.372 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.020 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.030 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.040 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:08.040 > git config core.sparsecheckout # timeout=10 00:00:08.049 > git read-tree -mu HEAD # timeout=10 00:00:08.064 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:08.087 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:08.087 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:08.201 [Pipeline] Start of Pipeline 00:00:08.214 [Pipeline] library 00:00:08.215 Loading library shm_lib@master 00:00:08.215 Library shm_lib@master is cached. Copying from home. 00:00:08.229 [Pipeline] node 00:00:08.242 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:08.243 [Pipeline] { 00:00:08.255 [Pipeline] catchError 00:00:08.257 [Pipeline] { 00:00:08.271 [Pipeline] wrap 00:00:08.278 [Pipeline] { 00:00:08.285 [Pipeline] stage 00:00:08.287 [Pipeline] { (Prologue) 00:00:08.305 [Pipeline] echo 00:00:08.307 Node: VM-host-SM38 00:00:08.313 [Pipeline] cleanWs 00:00:08.323 [WS-CLEANUP] Deleting project workspace... 00:00:08.323 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.329 [WS-CLEANUP] done 00:00:08.509 [Pipeline] setCustomBuildProperty 00:00:08.594 [Pipeline] httpRequest 00:00:09.146 [Pipeline] echo 00:00:09.148 Sorcerer 10.211.164.20 is alive 00:00:09.159 [Pipeline] retry 00:00:09.161 [Pipeline] { 00:00:09.176 [Pipeline] httpRequest 00:00:09.182 HttpMethod: GET 00:00:09.183 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.183 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.184 Response Code: HTTP/1.1 200 OK 00:00:09.185 Success: Status code 200 is in the accepted range: 200,404 00:00:09.185 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.342 [Pipeline] } 00:00:10.362 [Pipeline] // retry 00:00:10.367 [Pipeline] sh 00:00:10.696 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.713 [Pipeline] httpRequest 00:00:11.370 [Pipeline] echo 00:00:11.372 Sorcerer 10.211.164.20 is alive 00:00:11.380 [Pipeline] retry 00:00:11.381 [Pipeline] { 00:00:11.394 [Pipeline] httpRequest 00:00:11.399 HttpMethod: GET 00:00:11.399 URL: http://10.211.164.20/packages/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz 00:00:11.400 Sending request to url: http://10.211.164.20/packages/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz 00:00:11.426 Response Code: HTTP/1.1 200 OK 00:00:11.427 Success: Status code 200 is in the accepted range: 200,404 00:00:11.427 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz 00:01:39.741 [Pipeline] } 00:01:39.758 [Pipeline] // retry 00:01:39.766 [Pipeline] sh 00:01:40.053 + tar --no-same-owner -xf spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz 00:01:43.364 [Pipeline] sh 00:01:43.640 + git -C spdk log --oneline -n5 00:01:43.640 2f2acf4eb doc: move nvmf_tracing.md to tracing.md 00:01:43.640 5592070b3 doc: update nvmf_tracing.md 00:01:43.640 5ca6db5da nvme_spec: Add SPDK_NVME_IO_FLAGS_PRCHK_MASK 00:01:43.640 f7ce15267 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set 00:01:43.640 aa58c9e0b dif: Add spdk_dif_pi_format_get_size() to use for NVMe PRACT 00:01:43.659 [Pipeline] withCredentials 00:01:43.668 > git --version # timeout=10 00:01:43.681 > git --version # 'git version 2.39.2' 00:01:43.694 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:43.697 [Pipeline] { 00:01:43.705 [Pipeline] retry 00:01:43.707 [Pipeline] { 00:01:43.721 [Pipeline] sh 00:01:43.999 + git ls-remote http://dpdk.org/git/dpdk main 00:01:44.009 [Pipeline] } 00:01:44.027 [Pipeline] // retry 00:01:44.032 [Pipeline] } 00:01:44.049 [Pipeline] // withCredentials 00:01:44.059 [Pipeline] httpRequest 00:01:44.436 [Pipeline] echo 00:01:44.438 Sorcerer 10.211.164.20 is alive 00:01:44.448 [Pipeline] retry 00:01:44.451 [Pipeline] { 00:01:44.465 [Pipeline] httpRequest 00:01:44.470 HttpMethod: GET 00:01:44.470 URL: http://10.211.164.20/packages/dpdk_f86085caab0c6c5dc630b9d6ad20d1c728e7703e.tar.gz 00:01:44.471 Sending request to url: http://10.211.164.20/packages/dpdk_f86085caab0c6c5dc630b9d6ad20d1c728e7703e.tar.gz 00:01:44.472 Response Code: HTTP/1.1 200 OK 00:01:44.472 Success: Status code 200 is in the accepted range: 200,404 00:01:44.473 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_f86085caab0c6c5dc630b9d6ad20d1c728e7703e.tar.gz 00:01:50.608 [Pipeline] } 00:01:50.624 [Pipeline] // retry 00:01:50.631 
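Each source snapshot above (jbp, spdk, and dpdk) is served as a pinned tarball from the package cache at 10.211.164.20 rather than cloned fresh. A condensed sketch of that fetch-and-unpack pattern, assuming curl in place of the Jenkins httpRequest step (the commit hash is the spdk one from this run):

    # Fetch a pinned source snapshot from the internal package cache,
    # then unpack it without preserving the archive's file ownership.
    sha=2f2acf4eb25cee406c156120cee22721275ca7fd
    curl -fSo "spdk_${sha}.tar.gz" "http://10.211.164.20/packages/spdk_${sha}.tar.gz"
    tar --no-same-owner -xf "spdk_${sha}.tar.gz"

Serving snapshots keyed by commit hash gives every run reproducible inputs and avoids repeated clones of the upstream git servers.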
[Pipeline] sh 00:01:50.916 + tar --no-same-owner -xf dpdk_f86085caab0c6c5dc630b9d6ad20d1c728e7703e.tar.gz 00:01:52.311 [Pipeline] sh 00:01:52.593 + git -C dpdk log --oneline -n5 00:01:52.593 f86085caab app/testpmd: avoid potential outside of array reference 00:01:52.593 4c2e746842 app/testpmd: remove redundant policy action condition 00:01:52.593 357f915ef5 test/eal: fix lcore check 00:01:52.593 b3e64fe596 test/eal: fix loop coverage for alignment macros 00:01:52.593 c6f484adf1 test/crypto: fix TLS zero length record check 00:01:52.610 [Pipeline] writeFile 00:01:52.625 [Pipeline] sh 00:01:52.910 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:52.922 [Pipeline] sh 00:01:53.207 + cat autorun-spdk.conf 00:01:53.207 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:53.207 SPDK_TEST_NVME=1 00:01:53.207 SPDK_TEST_FTL=1 00:01:53.207 SPDK_TEST_ISAL=1 00:01:53.207 SPDK_RUN_ASAN=1 00:01:53.207 SPDK_RUN_UBSAN=1 00:01:53.207 SPDK_TEST_XNVME=1 00:01:53.207 SPDK_TEST_NVME_FDP=1 00:01:53.207 SPDK_TEST_NATIVE_DPDK=main 00:01:53.207 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:53.207 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:53.216 RUN_NIGHTLY=1 00:01:53.218 [Pipeline] } 00:01:53.231 [Pipeline] // stage 00:01:53.245 [Pipeline] stage 00:01:53.247 [Pipeline] { (Run VM) 00:01:53.259 [Pipeline] sh 00:01:53.543 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:53.543 + echo 'Start stage prepare_nvme.sh' 00:01:53.543 Start stage prepare_nvme.sh 00:01:53.543 + [[ -n 6 ]] 00:01:53.543 + disk_prefix=ex6 00:01:53.543 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:53.543 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:53.543 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:53.543 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:53.543 ++ SPDK_TEST_NVME=1 00:01:53.543 ++ SPDK_TEST_FTL=1 00:01:53.543 ++ SPDK_TEST_ISAL=1 00:01:53.543 ++ SPDK_RUN_ASAN=1 00:01:53.543 ++ SPDK_RUN_UBSAN=1 00:01:53.543 ++ SPDK_TEST_XNVME=1 00:01:53.543 ++ SPDK_TEST_NVME_FDP=1 00:01:53.543 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:53.543 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:53.544 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:53.544 ++ RUN_NIGHTLY=1 00:01:53.544 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:53.544 + nvme_files=() 00:01:53.544 + declare -A nvme_files 00:01:53.544 + backend_dir=/var/lib/libvirt/images/backends 00:01:53.544 + nvme_files['nvme.img']=5G 00:01:53.544 + nvme_files['nvme-cmb.img']=5G 00:01:53.544 + nvme_files['nvme-multi0.img']=4G 00:01:53.544 + nvme_files['nvme-multi1.img']=4G 00:01:53.544 + nvme_files['nvme-multi2.img']=4G 00:01:53.544 + nvme_files['nvme-openstack.img']=8G 00:01:53.544 + nvme_files['nvme-zns.img']=5G 00:01:53.544 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:53.544 + (( SPDK_TEST_FTL == 1 )) 00:01:53.544 + nvme_files["nvme-ftl.img"]=6G 00:01:53.544 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:53.544 + nvme_files["nvme-fdp.img"]=1G 00:01:53.544 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:53.544 + for nvme in "${!nvme_files[@]}" 00:01:53.544 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:01:53.544 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:53.544 + for nvme in "${!nvme_files[@]}" 00:01:53.544 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-ftl.img -s 6G 00:01:53.544 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:53.544 + for nvme in "${!nvme_files[@]}" 00:01:53.544 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:01:53.805 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:53.805 + for nvme in "${!nvme_files[@]}" 00:01:53.805 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:01:53.805 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:53.805 + for nvme in "${!nvme_files[@]}" 00:01:53.805 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:01:53.805 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:53.805 + for nvme in "${!nvme_files[@]}" 00:01:53.805 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:01:53.805 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:53.805 + for nvme in "${!nvme_files[@]}" 00:01:53.805 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:01:53.805 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:53.805 + for nvme in "${!nvme_files[@]}" 00:01:53.805 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-fdp.img -s 1G 00:01:53.805 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:53.805 + for nvme in "${!nvme_files[@]}" 00:01:53.805 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:01:54.378 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:54.378 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:01:54.378 + echo 'End stage prepare_nvme.sh' 00:01:54.378 End stage prepare_nvme.sh 00:01:54.391 [Pipeline] sh 00:01:54.676 + DISTRO=fedora39 00:01:54.676 + CPUS=10 00:01:54.676 + RAM=12288 00:01:54.676 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:54.676 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex6-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:54.676 00:01:54.676 
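The Formatting lines above are qemu-img's creation output for each raw NVMe backing file. A minimal sketch of an equivalent manual invocation for one of them, assuming create_nvme_img.sh wraps plain qemu-img (the script's exact flags are not shown in the log):

    # Create a 4 GiB raw backing image, preallocated via fallocate(),
    # matching "fmt=raw size=4294967296 preallocation=falloc" above.
    sudo qemu-img create -f raw -o preallocation=falloc \
        /var/lib/libvirt/images/backends/ex6-nvme-multi2.img 4G

preallocation=falloc reserves the blocks up front via posix_fallocate() without writing them, which is why even the 6-8 GiB images are created near-instantly compared to preallocation=full.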
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:54.676 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:54.676 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:54.676 HELP=0 00:01:54.676 DRY_RUN=0 00:01:54.677 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,/var/lib/libvirt/images/backends/ex6-nvme-fdp.img, 00:01:54.677 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:54.677 NVME_AUTO_CREATE=0 00:01:54.677 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,, 00:01:54.677 NVME_CMB=,,,, 00:01:54.677 NVME_PMR=,,,, 00:01:54.677 NVME_ZNS=,,,, 00:01:54.677 NVME_MS=true,,,, 00:01:54.677 NVME_FDP=,,,on, 00:01:54.677 SPDK_VAGRANT_DISTRO=fedora39 00:01:54.677 SPDK_VAGRANT_VMCPU=10 00:01:54.677 SPDK_VAGRANT_VMRAM=12288 00:01:54.677 SPDK_VAGRANT_PROVIDER=libvirt 00:01:54.677 SPDK_VAGRANT_HTTP_PROXY= 00:01:54.677 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:54.677 SPDK_OPENSTACK_NETWORK=0 00:01:54.677 VAGRANT_PACKAGE_BOX=0 00:01:54.677 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:54.677 FORCE_DISTRO=true 00:01:54.677 VAGRANT_BOX_VERSION= 00:01:54.677 EXTRA_VAGRANTFILES= 00:01:54.677 NIC_MODEL=e1000 00:01:54.677 00:01:54.677 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:54.677 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:57.219 Bringing machine 'default' up with 'libvirt' provider... 00:01:57.480 ==> default: Creating image (snapshot of base box volume). 00:01:57.742 ==> default: Creating domain with the following settings... 
00:01:57.742 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732661196_8ecf6168016189bf5b18 00:01:57.742 ==> default: -- Domain type: kvm 00:01:57.742 ==> default: -- Cpus: 10 00:01:57.742 ==> default: -- Feature: acpi 00:01:57.742 ==> default: -- Feature: apic 00:01:57.742 ==> default: -- Feature: pae 00:01:57.742 ==> default: -- Memory: 12288M 00:01:57.742 ==> default: -- Memory Backing: hugepages: 00:01:57.742 ==> default: -- Management MAC: 00:01:57.742 ==> default: -- Loader: 00:01:57.742 ==> default: -- Nvram: 00:01:57.742 ==> default: -- Base box: spdk/fedora39 00:01:57.742 ==> default: -- Storage pool: default 00:01:57.742 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732661196_8ecf6168016189bf5b18.img (20G) 00:01:57.742 ==> default: -- Volume Cache: default 00:01:57.742 ==> default: -- Kernel: 00:01:57.742 ==> default: -- Initrd: 00:01:57.742 ==> default: -- Graphics Type: vnc 00:01:57.742 ==> default: -- Graphics Port: -1 00:01:57.742 ==> default: -- Graphics IP: 127.0.0.1 00:01:57.742 ==> default: -- Graphics Password: Not defined 00:01:57.742 ==> default: -- Video Type: cirrus 00:01:57.742 ==> default: -- Video VRAM: 9216 00:01:57.742 ==> default: -- Sound Type: 00:01:57.742 ==> default: -- Keymap: en-us 00:01:57.742 ==> default: -- TPM Path: 00:01:57.742 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:57.742 ==> default: -- Command line args: 00:01:57.742 ==> default: -> value=-device, 00:01:57.742 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:57.742 ==> default: -> value=-drive, 00:01:57.742 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:57.742 ==> default: -> value=-device, 00:01:57.742 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:57.742 ==> default: -> value=-device, 00:01:57.742 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:57.742 ==> default: -> value=-drive, 00:01:57.742 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-1-drive0, 00:01:57.742 ==> default: -> value=-device, 00:01:57.742 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:57.742 ==> default: -> value=-device, 00:01:57.742 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:57.742 ==> default: -> value=-drive, 00:01:57.742 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:57.742 ==> default: -> value=-device, 00:01:57.742 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:57.742 ==> default: -> value=-drive, 00:01:57.742 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:57.742 ==> default: -> value=-device, 00:01:57.742 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:57.742 ==> default: -> value=-drive, 00:01:57.742 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:57.742 ==> default: -> value=-device, 00:01:57.742 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:57.742 ==> default: -> value=-device, 00:01:57.742 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:57.742 ==> default: -> value=-device, 00:01:57.742 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:57.742 ==> default: -> value=-drive, 00:01:57.742 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:57.742 ==> default: -> value=-device, 00:01:57.742 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:57.743 ==> default: Creating shared folders metadata... 00:01:58.004 ==> default: Starting domain. 00:01:59.390 ==> default: Waiting for domain to get an IP address... 00:02:17.523 ==> default: Waiting for SSH to become available... 00:02:17.523 ==> default: Configuring and enabling network interfaces... 00:02:20.136 default: SSH address: 192.168.121.154:22 00:02:20.136 default: SSH username: vagrant 00:02:20.136 default: SSH auth method: private key 00:02:22.049 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:30.195 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:35.518 ==> default: Mounting SSHFS shared folder... 00:02:38.065 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:38.066 ==> default: Checking Mount.. 00:02:39.038 ==> default: Folder Successfully Mounted! 00:02:39.038 00:02:39.038 SUCCESS! 00:02:39.038 00:02:39.038 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:39.038 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:39.038 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:39.038 00:02:39.049 [Pipeline] } 00:02:39.064 [Pipeline] // stage 00:02:39.073 [Pipeline] dir 00:02:39.073 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:39.075 [Pipeline] { 00:02:39.087 [Pipeline] catchError 00:02:39.089 [Pipeline] { 00:02:39.102 [Pipeline] sh 00:02:39.386 + vagrant ssh-config --host vagrant 00:02:39.386 + sed -ne '/^Host/,$p' 00:02:39.386 + tee ssh_conf 00:02:42.688 Host vagrant 00:02:42.688 HostName 192.168.121.154 00:02:42.688 User vagrant 00:02:42.688 Port 22 00:02:42.688 UserKnownHostsFile /dev/null 00:02:42.688 StrictHostKeyChecking no 00:02:42.688 PasswordAuthentication no 00:02:42.688 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:42.688 IdentitiesOnly yes 00:02:42.688 LogLevel FATAL 00:02:42.688 ForwardAgent yes 00:02:42.688 ForwardX11 yes 00:02:42.688 00:02:42.704 [Pipeline] withEnv 00:02:42.706 [Pipeline] { 00:02:42.721 [Pipeline] sh 00:02:43.004 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:43.005 source /etc/os-release 00:02:43.005 [[ -e /image.version ]] && img=$(< /image.version) 00:02:43.005 # Minimal, systemd-like check. 
00:02:43.005 if [[ -e /.dockerenv ]]; then 00:02:43.005 # Clear garbage from the node'\''s name: 00:02:43.005 # agt-er_autotest_547-896 -> autotest_547-896 00:02:43.005 # $HOSTNAME is the actual container id 00:02:43.005 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:43.005 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:43.005 # We can assume this is a mount from a host where container is running, 00:02:43.005 # so fetch its hostname to easily identify the target swarm worker. 00:02:43.005 container="$(< /etc/hostname) ($agent)" 00:02:43.005 else 00:02:43.005 # Fallback 00:02:43.005 container=$agent 00:02:43.005 fi 00:02:43.005 fi 00:02:43.005 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:43.005 ' 00:02:43.278 [Pipeline] } 00:02:43.294 [Pipeline] // withEnv 00:02:43.302 [Pipeline] setCustomBuildProperty 00:02:43.316 [Pipeline] stage 00:02:43.318 [Pipeline] { (Tests) 00:02:43.333 [Pipeline] sh 00:02:43.620 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:43.897 [Pipeline] sh 00:02:44.210 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:44.227 [Pipeline] timeout 00:02:44.227 Timeout set to expire in 50 min 00:02:44.229 [Pipeline] { 00:02:44.244 [Pipeline] sh 00:02:44.530 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:45.101 HEAD is now at 2f2acf4eb doc: move nvmf_tracing.md to tracing.md 00:02:45.114 [Pipeline] sh 00:02:45.399 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:45.671 [Pipeline] sh 00:02:45.950 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:46.225 [Pipeline] sh 00:02:46.504 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:46.762 ++ readlink -f spdk_repo 00:02:46.762 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:46.762 + [[ -n /home/vagrant/spdk_repo ]] 00:02:46.762 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:46.762 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:46.762 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:46.762 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:46.762 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:46.762 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:46.762 + cd /home/vagrant/spdk_repo 00:02:46.762 + source /etc/os-release 00:02:46.762 ++ NAME='Fedora Linux' 00:02:46.762 ++ VERSION='39 (Cloud Edition)' 00:02:46.762 ++ ID=fedora 00:02:46.762 ++ VERSION_ID=39 00:02:46.762 ++ VERSION_CODENAME= 00:02:46.762 ++ PLATFORM_ID=platform:f39 00:02:46.762 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:46.762 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:46.762 ++ LOGO=fedora-logo-icon 00:02:46.762 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:46.762 ++ HOME_URL=https://fedoraproject.org/ 00:02:46.762 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:46.762 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:46.762 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:46.762 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:46.762 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:46.762 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:46.762 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:46.762 ++ SUPPORT_END=2024-11-12 00:02:46.762 ++ VARIANT='Cloud Edition' 00:02:46.762 ++ VARIANT_ID=cloud 00:02:46.762 + uname -a 00:02:46.762 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:46.762 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:47.019 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:47.277 Hugepages 00:02:47.277 node hugesize free / total 00:02:47.277 node0 1048576kB 0 / 0 00:02:47.277 node0 2048kB 0 / 0 00:02:47.277 00:02:47.277 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:47.277 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:47.277 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:47.277 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:47.277 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:47.277 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:02:47.277 + rm -f /tmp/spdk-ld-path 00:02:47.277 + source autorun-spdk.conf 00:02:47.277 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:47.278 ++ SPDK_TEST_NVME=1 00:02:47.278 ++ SPDK_TEST_FTL=1 00:02:47.278 ++ SPDK_TEST_ISAL=1 00:02:47.278 ++ SPDK_RUN_ASAN=1 00:02:47.278 ++ SPDK_RUN_UBSAN=1 00:02:47.278 ++ SPDK_TEST_XNVME=1 00:02:47.278 ++ SPDK_TEST_NVME_FDP=1 00:02:47.278 ++ SPDK_TEST_NATIVE_DPDK=main 00:02:47.278 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:47.278 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:47.278 ++ RUN_NIGHTLY=1 00:02:47.278 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:47.278 + [[ -n '' ]] 00:02:47.278 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:47.278 + for M in /var/spdk/build-*-manifest.txt 00:02:47.278 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:47.278 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:47.537 + for M in /var/spdk/build-*-manifest.txt 00:02:47.537 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:47.537 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:47.537 + for M in /var/spdk/build-*-manifest.txt 00:02:47.537 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:47.537 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:47.537 ++ uname 00:02:47.537 + [[ Linux == 
\L\i\n\u\x ]] 00:02:47.537 + sudo dmesg -T 00:02:47.537 + sudo dmesg --clear 00:02:47.537 + dmesg_pid=5764 00:02:47.537 + [[ Fedora Linux == FreeBSD ]] 00:02:47.537 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:47.537 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:47.537 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:47.537 + sudo dmesg -Tw 00:02:47.537 + [[ -x /usr/src/fio-static/fio ]] 00:02:47.537 + export FIO_BIN=/usr/src/fio-static/fio 00:02:47.537 + FIO_BIN=/usr/src/fio-static/fio 00:02:47.537 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:47.537 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:47.537 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:47.537 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:47.537 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:47.537 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:47.537 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:47.537 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:47.537 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:47.537 22:47:26 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:47.537 22:47:26 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:47.537 22:47:26 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:47.537 22:47:26 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:02:47.537 22:47:26 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:02:47.537 22:47:26 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:02:47.537 22:47:26 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:02:47.537 22:47:26 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:47.537 22:47:26 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:02:47.537 22:47:26 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:02:47.537 22:47:26 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=main 00:02:47.537 22:47:26 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:47.537 22:47:26 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:47.537 22:47:26 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1 00:02:47.537 22:47:26 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:47.537 22:47:26 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:47.537 22:47:26 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:47.537 22:47:26 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:47.537 22:47:26 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:47.537 22:47:26 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:47.537 22:47:26 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:47.537 22:47:26 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:47.537 22:47:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.537 22:47:26 -- 
paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.537 22:47:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.537 22:47:26 -- paths/export.sh@5 -- $ export PATH 00:02:47.537 22:47:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.537 22:47:26 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:47.537 22:47:26 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:47.537 22:47:26 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732661246.XXXXXX 00:02:47.537 22:47:26 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732661246.R9sNRx 00:02:47.537 22:47:26 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:47.537 22:47:26 -- common/autobuild_common.sh@499 -- $ '[' -n main ']' 00:02:47.537 22:47:26 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:47.537 22:47:26 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:47.537 22:47:26 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:47.537 22:47:26 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:47.537 22:47:26 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:47.537 22:47:26 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:47.537 22:47:26 -- common/autotest_common.sh@10 -- $ set +x 00:02:47.537 22:47:26 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:47.537 22:47:26 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:47.537 22:47:26 -- pm/common@17 -- $ local monitor 00:02:47.538 22:47:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.538 22:47:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.538 22:47:26 -- pm/common@25 -- $ 
sleep 1 00:02:47.538 22:47:26 -- pm/common@21 -- $ date +%s 00:02:47.538 22:47:26 -- pm/common@21 -- $ date +%s 00:02:47.796 22:47:26 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732661246 00:02:47.796 22:47:26 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732661246 00:02:47.796 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732661246_collect-vmstat.pm.log 00:02:47.796 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732661246_collect-cpu-load.pm.log 00:02:48.731 22:47:27 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:48.732 22:47:27 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:48.732 22:47:27 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:48.732 22:47:27 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:48.732 22:47:27 -- spdk/autobuild.sh@16 -- $ date -u 00:02:48.732 Tue Nov 26 10:47:27 PM UTC 2024 00:02:48.732 22:47:27 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:48.732 v25.01-pre-271-g2f2acf4eb 00:02:48.732 22:47:27 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:48.732 22:47:27 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:48.732 22:47:27 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:48.732 22:47:27 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:48.732 22:47:27 -- common/autotest_common.sh@10 -- $ set +x 00:02:48.732 ************************************ 00:02:48.732 START TEST asan 00:02:48.732 ************************************ 00:02:48.732 using asan 00:02:48.732 22:47:27 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:48.732 00:02:48.732 real 0m0.000s 00:02:48.732 user 0m0.000s 00:02:48.732 sys 0m0.000s 00:02:48.732 ************************************ 00:02:48.732 END TEST asan 00:02:48.732 ************************************ 00:02:48.732 22:47:27 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:48.732 22:47:27 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:48.732 22:47:27 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:48.732 22:47:27 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:48.732 22:47:27 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:48.732 22:47:27 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:48.732 22:47:27 -- common/autotest_common.sh@10 -- $ set +x 00:02:48.732 ************************************ 00:02:48.732 START TEST ubsan 00:02:48.732 ************************************ 00:02:48.732 using ubsan 00:02:48.732 22:47:27 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:48.732 00:02:48.732 real 0m0.000s 00:02:48.732 user 0m0.000s 00:02:48.732 sys 0m0.000s 00:02:48.732 ************************************ 00:02:48.732 END TEST ubsan 00:02:48.732 ************************************ 00:02:48.732 22:47:27 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:48.732 22:47:27 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:48.732 22:47:27 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:02:48.732 22:47:27 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:48.732 22:47:27 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:48.732 22:47:27 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 
']' 00:02:48.732 22:47:27 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:48.732 22:47:27 -- common/autotest_common.sh@10 -- $ set +x 00:02:48.732 ************************************ 00:02:48.732 START TEST build_native_dpdk 00:02:48.732 ************************************ 00:02:48.732 22:47:27 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:48.732 f86085caab app/testpmd: avoid potential outside of array reference 00:02:48.732 4c2e746842 app/testpmd: remove redundant policy action condition 00:02:48.732 357f915ef5 test/eal: fix lcore check 00:02:48.732 b3e64fe596 test/eal: fix loop coverage for alignment macros 00:02:48.732 c6f484adf1 test/crypto: fix TLS zero length record check 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.11.0-rc3 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm") 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:02:48.732 22:47:27 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.11.0-rc3 21.11.0 00:02:48.732 22:47:27 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc3 '<' 21.11.0 00:02:48.732 22:47:27 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:48.732 22:47:27 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:48.732 22:47:27 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:48.732 22:47:27 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:48.732 22:47:27 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:48.732 22:47:27 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:48.732 22:47:27 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:48.732 22:47:27 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:02:48.732 22:47:27 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:48.732 22:47:27 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:48.732 22:47:27 
build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:48.732 22:47:27 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:48.732 22:47:27 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:48.732 22:47:27 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:48.732 22:47:27 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:02:48.732 22:47:27 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:48.732 22:47:27 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:48.732 22:47:27 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:48.732 22:47:27 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:02:48.732 22:47:27 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:48.732 22:47:27 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:48.732 22:47:27 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:48.732 22:47:27 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:48.732 22:47:27 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:48.732 22:47:27 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:48.732 22:47:27 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:48.992 22:47:27 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:02:48.992 patching file config/rte_config.h 00:02:48.992 Hunk #1 succeeded at 72 (offset 13 lines). 00:02:48.992 22:47:27 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 24.11.0-rc3 24.07.0 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc3 '<' 24.07.0 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:48.992 22:47:27 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 24.11.0-rc3 24.07.0 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 24.11.0-rc3 '>=' 24.07.0 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:48.992 22:47:27 build_native_dpdk -- 
scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:48.992 22:47:27 build_native_dpdk -- scripts/common.sh@367 -- $ return 0 00:02:48.992 22:47:27 build_native_dpdk -- common/autobuild_common.sh@187 -- $ patch -p1 00:02:48.992 patching file drivers/bus/pci/linux/pci_uio.c 00:02:48.992 22:47:27 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:02:48.992 22:47:27 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:48.992 22:47:27 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']' 00:02:48.992 22:47:27 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:02:48.992 22:47:27 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:53.249 The Meson build system 00:02:53.249 Version: 1.5.0 00:02:53.249 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:53.249 Build dir: 
/home/vagrant/spdk_repo/dpdk/build-tmp 00:02:53.249 Build type: native build 00:02:53.249 Project name: DPDK 00:02:53.249 Project version: 24.11.0-rc3 00:02:53.249 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:53.249 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:53.249 Host machine cpu family: x86_64 00:02:53.249 Host machine cpu: x86_64 00:02:53.249 Message: ## Building in Developer Mode ## 00:02:53.249 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:53.249 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:53.249 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:53.249 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:02:53.249 Program cat found: YES (/usr/bin/cat) 00:02:53.249 config/meson.build:122: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:02:53.249 Compiler for C supports arguments -march=native: YES 00:02:53.249 Checking for size of "void *" : 8 00:02:53.249 Checking for size of "void *" : 8 (cached) 00:02:53.249 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:53.249 Library m found: YES 00:02:53.249 Library numa found: YES 00:02:53.249 Has header "numaif.h" : YES 00:02:53.250 Library fdt found: NO 00:02:53.250 Library execinfo found: NO 00:02:53.250 Has header "execinfo.h" : YES 00:02:53.250 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:53.250 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:53.250 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:53.250 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:53.250 Run-time dependency openssl found: YES 3.1.1 00:02:53.250 Run-time dependency libpcap found: YES 1.10.4 00:02:53.250 Has header "pcap.h" with dependency libpcap: YES 00:02:53.250 Compiler for C supports arguments -Wcast-qual: YES 00:02:53.250 Compiler for C supports arguments -Wdeprecated: YES 00:02:53.250 Compiler for C supports arguments -Wformat: YES 00:02:53.250 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:53.250 Compiler for C supports arguments -Wformat-security: NO 00:02:53.250 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:53.250 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:53.250 Compiler for C supports arguments -Wnested-externs: YES 00:02:53.250 Compiler for C supports arguments -Wold-style-definition: YES 00:02:53.250 Compiler for C supports arguments -Wpointer-arith: YES 00:02:53.250 Compiler for C supports arguments -Wsign-compare: YES 00:02:53.250 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:53.250 Compiler for C supports arguments -Wundef: YES 00:02:53.250 Compiler for C supports arguments -Wwrite-strings: YES 00:02:53.250 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:53.250 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:53.250 Program objdump found: YES (/usr/bin/objdump) 00:02:53.250 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512dq -mavx512bw: YES 00:02:53.250 Checking if "AVX512 checking" compiles: YES 00:02:53.250 Fetching value of define "__AVX512F__" : 1 00:02:53.250 Fetching value of define "__AVX512BW__" : 1 00:02:53.250 Fetching value of define "__AVX512DQ__" : 1 00:02:53.250 Fetching value of define "__AVX512VL__" : 1 00:02:53.250 Fetching value of define 
"__SSE4_2__" : 1 00:02:53.250 Fetching value of define "__AES__" : 1 00:02:53.250 Fetching value of define "__AVX__" : 1 00:02:53.250 Fetching value of define "__AVX2__" : 1 00:02:53.250 Fetching value of define "__AVX512BW__" : 1 00:02:53.250 Fetching value of define "__AVX512CD__" : 1 00:02:53.250 Fetching value of define "__AVX512DQ__" : 1 00:02:53.250 Fetching value of define "__AVX512F__" : 1 00:02:53.250 Fetching value of define "__AVX512VL__" : 1 00:02:53.250 Fetching value of define "__PCLMUL__" : 1 00:02:53.250 Fetching value of define "__RDRND__" : 1 00:02:53.250 Fetching value of define "__RDSEED__" : 1 00:02:53.250 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:53.250 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:53.250 Message: lib/log: Defining dependency "log" 00:02:53.250 Message: lib/kvargs: Defining dependency "kvargs" 00:02:53.250 Message: lib/argparse: Defining dependency "argparse" 00:02:53.250 Message: lib/telemetry: Defining dependency "telemetry" 00:02:53.250 Checking for function "pthread_attr_setaffinity_np" : YES 00:02:53.250 Checking for function "getentropy" : NO 00:02:53.250 Message: lib/eal: Defining dependency "eal" 00:02:53.250 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:02:53.250 Message: lib/ring: Defining dependency "ring" 00:02:53.250 Message: lib/rcu: Defining dependency "rcu" 00:02:53.250 Message: lib/mempool: Defining dependency "mempool" 00:02:53.250 Message: lib/mbuf: Defining dependency "mbuf" 00:02:53.250 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:53.250 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:53.250 Compiler for C supports arguments -mpclmul: YES 00:02:53.250 Compiler for C supports arguments -maes: YES 00:02:53.250 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:53.250 Message: lib/net: Defining dependency "net" 00:02:53.250 Message: lib/meter: Defining dependency "meter" 00:02:53.250 Message: lib/ethdev: Defining dependency "ethdev" 00:02:53.250 Message: lib/pci: Defining dependency "pci" 00:02:53.250 Message: lib/cmdline: Defining dependency "cmdline" 00:02:53.250 Message: lib/metrics: Defining dependency "metrics" 00:02:53.250 Message: lib/hash: Defining dependency "hash" 00:02:53.250 Message: lib/timer: Defining dependency "timer" 00:02:53.250 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:53.250 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:53.250 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:53.250 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:53.250 Message: lib/acl: Defining dependency "acl" 00:02:53.250 Message: lib/bbdev: Defining dependency "bbdev" 00:02:53.250 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:53.250 Run-time dependency libelf found: YES 0.191 00:02:53.250 Message: lib/bpf: Defining dependency "bpf" 00:02:53.250 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:53.250 Message: lib/compressdev: Defining dependency "compressdev" 00:02:53.250 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:53.250 Message: lib/distributor: Defining dependency "distributor" 00:02:53.250 Message: lib/dmadev: Defining dependency "dmadev" 00:02:53.250 Message: lib/efd: Defining dependency "efd" 00:02:53.250 Message: lib/eventdev: Defining dependency "eventdev" 00:02:53.250 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:53.250 Message: lib/gpudev: Defining dependency "gpudev" 00:02:53.250 Message: lib/gro: Defining dependency "gro" 00:02:53.250 
Message: lib/gso: Defining dependency "gso" 00:02:53.250 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:53.250 Message: lib/jobstats: Defining dependency "jobstats" 00:02:53.250 Message: lib/latencystats: Defining dependency "latencystats" 00:02:53.250 Message: lib/lpm: Defining dependency "lpm" 00:02:53.250 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:53.250 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:53.250 Fetching value of define "__AVX512IFMA__" : 1 00:02:53.250 Message: lib/member: Defining dependency "member" 00:02:53.250 Message: lib/pcapng: Defining dependency "pcapng" 00:02:53.250 Message: lib/power: Defining dependency "power" 00:02:53.250 Message: lib/rawdev: Defining dependency "rawdev" 00:02:53.250 Message: lib/regexdev: Defining dependency "regexdev" 00:02:53.250 Message: lib/mldev: Defining dependency "mldev" 00:02:53.250 Message: lib/rib: Defining dependency "rib" 00:02:53.250 Message: lib/reorder: Defining dependency "reorder" 00:02:53.250 Message: lib/sched: Defining dependency "sched" 00:02:53.250 Message: lib/security: Defining dependency "security" 00:02:53.250 Message: lib/stack: Defining dependency "stack" 00:02:53.250 Has header "linux/userfaultfd.h" : YES 00:02:53.250 Has header "linux/vduse.h" : YES 00:02:53.250 Message: lib/vhost: Defining dependency "vhost" 00:02:53.250 Message: lib/ipsec: Defining dependency "ipsec" 00:02:53.250 Message: lib/pdcp: Defining dependency "pdcp" 00:02:53.250 Message: lib/fib: Defining dependency "fib" 00:02:53.250 Message: lib/port: Defining dependency "port" 00:02:53.250 Message: lib/pdump: Defining dependency "pdump" 00:02:53.250 Message: lib/table: Defining dependency "table" 00:02:53.250 Message: lib/pipeline: Defining dependency "pipeline" 00:02:53.250 Message: lib/graph: Defining dependency "graph" 00:02:53.250 Message: lib/node: Defining dependency "node" 00:02:53.250 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:53.250 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:53.250 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:53.250 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:53.250 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:53.250 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:53.250 Compiler for C supports arguments -Wno-unused-value: YES 00:02:53.250 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:53.250 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:53.250 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:53.251 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:53.251 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:53.251 Message: drivers/power/acpi: Defining dependency "power_acpi" 00:02:53.251 Message: drivers/power/amd_pstate: Defining dependency "power_amd_pstate" 00:02:53.818 Message: drivers/power/cppc: Defining dependency "power_cppc" 00:02:53.818 Message: drivers/power/intel_pstate: Defining dependency "power_intel_pstate" 00:02:53.818 Message: drivers/power/intel_uncore: Defining dependency "power_intel_uncore" 00:02:53.818 Message: drivers/power/kvm_vm: Defining dependency "power_kvm_vm" 00:02:53.818 Has header "sys/epoll.h" : YES 00:02:53.818 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:53.818 Configuring doxy-api-html.conf using configuration 00:02:53.818 Configuring doxy-api-man.conf using configuration 00:02:53.818 Program mandb 
found: YES (/usr/bin/mandb) 00:02:53.818 Program sphinx-build found: NO 00:02:53.818 Program sphinx-build found: NO 00:02:53.818 Configuring rte_build_config.h using configuration 00:02:53.818 Message: 00:02:53.818 ================= 00:02:53.818 Applications Enabled 00:02:53.818 ================= 00:02:53.818 00:02:53.818 apps: 00:02:53.818 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:53.818 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:53.818 test-pmd, test-regex, test-sad, test-security-perf, 00:02:53.818 00:02:53.818 Message: 00:02:53.818 ================= 00:02:53.818 Libraries Enabled 00:02:53.818 ================= 00:02:53.818 00:02:53.818 libs: 00:02:53.818 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:02:53.818 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:02:53.818 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:02:53.818 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:02:53.818 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:02:53.818 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:02:53.818 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:02:53.818 graph, node, 00:02:53.818 00:02:53.818 Message: 00:02:53.818 =============== 00:02:53.818 Drivers Enabled 00:02:53.818 =============== 00:02:53.818 00:02:53.818 common: 00:02:53.818 00:02:53.818 bus: 00:02:53.818 pci, vdev, 00:02:53.818 mempool: 00:02:53.818 ring, 00:02:53.818 dma: 00:02:53.818 00:02:53.818 net: 00:02:53.818 i40e, 00:02:53.818 raw: 00:02:53.819 00:02:53.819 crypto: 00:02:53.819 00:02:53.819 compress: 00:02:53.819 00:02:53.819 regex: 00:02:53.819 00:02:53.819 ml: 00:02:53.819 00:02:53.819 vdpa: 00:02:53.819 00:02:53.819 event: 00:02:53.819 00:02:53.819 baseband: 00:02:53.819 00:02:53.819 gpu: 00:02:53.819 00:02:53.819 power: 00:02:53.819 acpi, amd_pstate, cppc, intel_pstate, intel_uncore, kvm_vm, 00:02:53.819 00:02:53.819 Message: 00:02:53.819 ================= 00:02:53.819 Content Skipped 00:02:53.819 ================= 00:02:53.819 00:02:53.819 apps: 00:02:53.819 00:02:53.819 libs: 00:02:53.819 00:02:53.819 drivers: 00:02:53.819 common/cpt: not in enabled drivers build config 00:02:53.819 common/dpaax: not in enabled drivers build config 00:02:53.819 common/iavf: not in enabled drivers build config 00:02:53.819 common/idpf: not in enabled drivers build config 00:02:53.819 common/ionic: not in enabled drivers build config 00:02:53.819 common/mvep: not in enabled drivers build config 00:02:53.819 common/octeontx: not in enabled drivers build config 00:02:53.819 bus/auxiliary: not in enabled drivers build config 00:02:53.819 bus/cdx: not in enabled drivers build config 00:02:53.819 bus/dpaa: not in enabled drivers build config 00:02:53.819 bus/fslmc: not in enabled drivers build config 00:02:53.819 bus/ifpga: not in enabled drivers build config 00:02:53.819 bus/platform: not in enabled drivers build config 00:02:53.819 bus/uacce: not in enabled drivers build config 00:02:53.819 bus/vmbus: not in enabled drivers build config 00:02:53.819 common/cnxk: not in enabled drivers build config 00:02:53.819 common/mlx5: not in enabled drivers build config 00:02:53.819 common/nfp: not in enabled drivers build config 00:02:53.819 common/nitrox: not in enabled drivers build config 00:02:53.819 common/qat: not in enabled drivers build config 00:02:53.819 common/sfc_efx: not in 
enabled drivers build config 00:02:53.819 mempool/bucket: not in enabled drivers build config 00:02:53.819 mempool/cnxk: not in enabled drivers build config 00:02:53.819 mempool/dpaa: not in enabled drivers build config 00:02:53.819 mempool/dpaa2: not in enabled drivers build config 00:02:53.819 mempool/octeontx: not in enabled drivers build config 00:02:53.819 mempool/stack: not in enabled drivers build config 00:02:53.819 dma/cnxk: not in enabled drivers build config 00:02:53.819 dma/dpaa: not in enabled drivers build config 00:02:53.819 dma/dpaa2: not in enabled drivers build config 00:02:53.819 dma/hisilicon: not in enabled drivers build config 00:02:53.819 dma/idxd: not in enabled drivers build config 00:02:53.819 dma/ioat: not in enabled drivers build config 00:02:53.819 dma/odm: not in enabled drivers build config 00:02:53.819 dma/skeleton: not in enabled drivers build config 00:02:53.819 net/af_packet: not in enabled drivers build config 00:02:53.819 net/af_xdp: not in enabled drivers build config 00:02:53.819 net/ark: not in enabled drivers build config 00:02:53.819 net/atlantic: not in enabled drivers build config 00:02:53.819 net/avp: not in enabled drivers build config 00:02:53.819 net/axgbe: not in enabled drivers build config 00:02:53.819 net/bnx2x: not in enabled drivers build config 00:02:53.819 net/bnxt: not in enabled drivers build config 00:02:53.819 net/bonding: not in enabled drivers build config 00:02:53.819 net/cnxk: not in enabled drivers build config 00:02:53.819 net/cpfl: not in enabled drivers build config 00:02:53.819 net/cxgbe: not in enabled drivers build config 00:02:53.819 net/dpaa: not in enabled drivers build config 00:02:53.819 net/dpaa2: not in enabled drivers build config 00:02:53.819 net/e1000: not in enabled drivers build config 00:02:53.819 net/ena: not in enabled drivers build config 00:02:53.819 net/enetc: not in enabled drivers build config 00:02:53.819 net/enetfec: not in enabled drivers build config 00:02:53.819 net/enic: not in enabled drivers build config 00:02:53.819 net/failsafe: not in enabled drivers build config 00:02:53.819 net/fm10k: not in enabled drivers build config 00:02:53.819 net/gve: not in enabled drivers build config 00:02:53.819 net/hinic: not in enabled drivers build config 00:02:53.819 net/hns3: not in enabled drivers build config 00:02:53.819 net/iavf: not in enabled drivers build config 00:02:53.819 net/ice: not in enabled drivers build config 00:02:53.819 net/idpf: not in enabled drivers build config 00:02:53.819 net/igc: not in enabled drivers build config 00:02:53.819 net/ionic: not in enabled drivers build config 00:02:53.819 net/ipn3ke: not in enabled drivers build config 00:02:53.819 net/ixgbe: not in enabled drivers build config 00:02:53.819 net/mana: not in enabled drivers build config 00:02:53.819 net/memif: not in enabled drivers build config 00:02:53.819 net/mlx4: not in enabled drivers build config 00:02:53.819 net/mlx5: not in enabled drivers build config 00:02:53.819 net/mvneta: not in enabled drivers build config 00:02:53.819 net/mvpp2: not in enabled drivers build config 00:02:53.819 net/netvsc: not in enabled drivers build config 00:02:53.819 net/nfb: not in enabled drivers build config 00:02:53.819 net/nfp: not in enabled drivers build config 00:02:53.819 net/ngbe: not in enabled drivers build config 00:02:53.819 net/ntnic: not in enabled drivers build config 00:02:53.819 net/null: not in enabled drivers build config 00:02:53.819 net/octeontx: not in enabled drivers build config 00:02:53.819 
net/octeon_ep: not in enabled drivers build config 00:02:53.819 net/pcap: not in enabled drivers build config 00:02:53.819 net/pfe: not in enabled drivers build config 00:02:53.819 net/qede: not in enabled drivers build config 00:02:53.819 net/r8169: not in enabled drivers build config 00:02:53.819 net/ring: not in enabled drivers build config 00:02:53.819 net/sfc: not in enabled drivers build config 00:02:53.819 net/softnic: not in enabled drivers build config 00:02:53.819 net/tap: not in enabled drivers build config 00:02:53.819 net/thunderx: not in enabled drivers build config 00:02:53.819 net/txgbe: not in enabled drivers build config 00:02:53.819 net/vdev_netvsc: not in enabled drivers build config 00:02:53.819 net/vhost: not in enabled drivers build config 00:02:53.819 net/virtio: not in enabled drivers build config 00:02:53.819 net/vmxnet3: not in enabled drivers build config 00:02:53.819 net/zxdh: not in enabled drivers build config 00:02:53.819 raw/cnxk_bphy: not in enabled drivers build config 00:02:53.819 raw/cnxk_gpio: not in enabled drivers build config 00:02:53.819 raw/cnxk_rvu_lf: not in enabled drivers build config 00:02:53.819 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:53.819 raw/gdtc: not in enabled drivers build config 00:02:53.819 raw/ifpga: not in enabled drivers build config 00:02:53.819 raw/ntb: not in enabled drivers build config 00:02:53.819 raw/skeleton: not in enabled drivers build config 00:02:53.819 crypto/armv8: not in enabled drivers build config 00:02:53.819 crypto/bcmfs: not in enabled drivers build config 00:02:53.819 crypto/caam_jr: not in enabled drivers build config 00:02:53.819 crypto/ccp: not in enabled drivers build config 00:02:53.819 crypto/cnxk: not in enabled drivers build config 00:02:53.819 crypto/dpaa_sec: not in enabled drivers build config 00:02:53.819 crypto/dpaa2_sec: not in enabled drivers build config 00:02:53.819 crypto/ionic: not in enabled drivers build config 00:02:53.819 crypto/ipsec_mb: not in enabled drivers build config 00:02:53.819 crypto/mlx5: not in enabled drivers build config 00:02:53.819 crypto/mvsam: not in enabled drivers build config 00:02:53.819 crypto/nitrox: not in enabled drivers build config 00:02:53.819 crypto/null: not in enabled drivers build config 00:02:53.819 crypto/octeontx: not in enabled drivers build config 00:02:53.819 crypto/openssl: not in enabled drivers build config 00:02:53.819 crypto/scheduler: not in enabled drivers build config 00:02:53.819 crypto/uadk: not in enabled drivers build config 00:02:53.819 crypto/virtio: not in enabled drivers build config 00:02:53.819 compress/isal: not in enabled drivers build config 00:02:53.819 compress/mlx5: not in enabled drivers build config 00:02:53.819 compress/nitrox: not in enabled drivers build config 00:02:53.819 compress/octeontx: not in enabled drivers build config 00:02:53.819 compress/uadk: not in enabled drivers build config 00:02:53.819 compress/zlib: not in enabled drivers build config 00:02:53.819 regex/mlx5: not in enabled drivers build config 00:02:53.819 regex/cn9k: not in enabled drivers build config 00:02:53.819 ml/cnxk: not in enabled drivers build config 00:02:53.819 vdpa/ifc: not in enabled drivers build config 00:02:53.819 vdpa/mlx5: not in enabled drivers build config 00:02:53.819 vdpa/nfp: not in enabled drivers build config 00:02:53.819 vdpa/sfc: not in enabled drivers build config 00:02:53.819 event/cnxk: not in enabled drivers build config 00:02:53.819 event/dlb2: not in enabled drivers build config 00:02:53.819 
event/dpaa: not in enabled drivers build config 00:02:53.819 event/dpaa2: not in enabled drivers build config 00:02:53.819 event/dsw: not in enabled drivers build config 00:02:53.819 event/opdl: not in enabled drivers build config 00:02:53.819 event/skeleton: not in enabled drivers build config 00:02:53.819 event/sw: not in enabled drivers build config 00:02:53.819 event/octeontx: not in enabled drivers build config 00:02:53.819 baseband/acc: not in enabled drivers build config 00:02:53.819 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:53.819 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:53.819 baseband/la12xx: not in enabled drivers build config 00:02:53.819 baseband/null: not in enabled drivers build config 00:02:53.819 baseband/turbo_sw: not in enabled drivers build config 00:02:53.819 gpu/cuda: not in enabled drivers build config 00:02:53.819 power/amd_uncore: not in enabled drivers build config 00:02:53.819 00:02:53.819 00:02:53.819 Message: DPDK build config complete: 00:02:53.819 source path = "/home/vagrant/spdk_repo/dpdk" 00:02:53.819 build path = "/home/vagrant/spdk_repo/dpdk/build-tmp" 00:02:53.819 Build targets in project: 244 00:02:53.819 00:02:53.819 DPDK 24.11.0-rc3 00:02:53.819 00:02:53.819 User defined options 00:02:53.819 libdir : lib 00:02:53.819 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:53.819 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:53.819 c_link_args : 00:02:53.819 enable_docs : false 00:02:53.819 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:53.819 enable_kmods : false 00:02:54.758 machine : native 00:02:54.758 tests : false 00:02:54.758 00:02:54.758 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:54.758 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
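[Editor's note] The shell trace at the top of this section (scripts/common.sh@353-368) walks a per-component version comparison that gates the pci_uio.c patch: DPDK 24.11 is compared against 24.07, component "24" ties, then 11 > 7 decides it and the helper returns 0 (success). Below is a minimal bash sketch of that logic for readers following the trace. It is an editorial reconstruction, not the actual scripts/common.sh source: the `decimal` helper and the `ver1`/`ver2`/`ver1_l`/`ver2_l` names follow the trace, while the `version_ge` wrapper name is assumed.

#!/usr/bin/env bash
# Sketch of the traced compare: each dotted component is normalized by
# `decimal` (so "07" becomes 7), then compared numerically left to right.
decimal() {
    local d=$1
    if [[ $d =~ ^[0-9]+$ ]]; then
        echo $((10#$d))   # force base 10, stripping leading zeros: 07 -> 7
    else
        echo 0            # non-numeric component treated as 0 (assumption)
    fi
}

# Returns 0 (true) when $1 >= $2, mirroring the `return 0` in the trace
# once 11 > 7 settles 24.11 vs 24.07.
version_ge() {
    local IFS=.
    local -a ver1=($1) ver2=($2)
    local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]} v a b
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        a=$(decimal "${ver1[v]:-0}")
        b=$(decimal "${ver2[v]:-0}")
        ((a > b)) && return 0
        ((a < b)) && return 1
    done
    return 0   # all components equal
}

version_ge 24.11 24.07 && echo "24.11 >= 24.07"

To reproduce the configure-and-build phase outside CI, the exact invocations are logged verbatim above and below: the `meson build-tmp --prefix=... -Denable_drivers=...` command at autobuild_common.sh@195, followed by `ninja -C .../build-tmp -j10` at autobuild_common.sh@199.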
00:02:54.758 22:47:33 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:54.758 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:54.758 [1/764] Compiling C object lib/librte_log.a.p/log_log_syslog.c.o 00:02:54.758 [2/764] Compiling C object lib/librte_log.a.p/log_log_journal.c.o 00:02:54.758 [3/764] Compiling C object lib/librte_log.a.p/log_log_timestamp.c.o 00:02:54.758 [4/764] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:54.758 [5/764] Compiling C object lib/librte_log.a.p/log_log_color.c.o 00:02:54.758 [6/764] Linking static target lib/librte_kvargs.a 00:02:54.758 [7/764] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:55.016 [8/764] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:55.016 [9/764] Linking static target lib/librte_log.a 00:02:55.016 [10/764] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:02:55.016 [11/764] Linking static target lib/librte_argparse.a 00:02:55.016 [12/764] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.016 [13/764] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.016 [14/764] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:55.016 [15/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:55.016 [16/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:55.016 [17/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:55.016 [18/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:55.275 [19/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:55.275 [20/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:55.275 [21/764] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.275 [22/764] Linking target lib/librte_log.so.25.0 00:02:55.275 [23/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:55.533 [24/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:55.533 [25/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:55.533 [26/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore_var.c.o 00:02:55.533 [27/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:55.533 [28/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:55.533 [29/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:55.533 [30/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:55.533 [31/764] Generating symbol file lib/librte_log.so.25.0.p/librte_log.so.25.0.symbols 00:02:55.533 [32/764] Linking target lib/librte_kvargs.so.25.0 00:02:55.792 [33/764] Generating symbol file lib/librte_kvargs.so.25.0.p/librte_kvargs.so.25.0.symbols 00:02:55.792 [34/764] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:55.792 [35/764] Linking static target lib/librte_telemetry.a 00:02:55.792 [36/764] Linking target lib/librte_argparse.so.25.0 00:02:55.792 [37/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:55.792 [38/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:55.792 [39/764] Compiling 
C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:55.792 [40/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:55.792 [41/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:56.051 [42/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:56.051 [43/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:56.051 [44/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:56.051 [45/764] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.051 [46/764] Linking target lib/librte_telemetry.so.25.0 00:02:56.051 [47/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_bitset.c.o 00:02:56.051 [48/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:56.051 [49/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:56.051 [50/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:56.051 [51/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:56.051 [52/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:56.309 [53/764] Generating symbol file lib/librte_telemetry.so.25.0.p/librte_telemetry.so.25.0.symbols 00:02:56.309 [54/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:56.309 [55/764] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:56.309 [56/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:56.309 [57/764] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:56.567 [58/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:56.567 [59/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:56.567 [60/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:56.567 [61/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:56.567 [62/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:56.567 [63/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:56.567 [64/764] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:56.567 [65/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:56.567 [66/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:56.826 [67/764] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:56.826 [68/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:56.826 [69/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:56.826 [70/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:56.826 [71/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:56.826 [72/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:56.826 [73/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:56.826 [74/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:56.826 [75/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:57.084 [76/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:57.084 [77/764] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:57.084 [78/764] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:57.084 [79/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:57.084 [80/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:57.084 [81/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:57.084 [82/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:57.084 [83/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:57.342 [84/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:57.343 [85/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:57.343 [86/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:57.343 [87/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:57.343 [88/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:57.343 [89/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:57.343 [90/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:02:57.602 [91/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:57.602 [92/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:57.602 [93/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:57.602 [94/764] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:57.602 [95/764] Linking static target lib/librte_ring.a 00:02:57.602 [96/764] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:57.860 [97/764] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:57.860 [98/764] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:57.860 [99/764] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:57.860 [100/764] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.860 [101/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:57.860 [102/764] Linking static target lib/librte_eal.a 00:02:57.860 [103/764] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:57.860 [104/764] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:57.860 [105/764] Linking static target lib/librte_mempool.a 00:02:58.118 [106/764] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:58.119 [107/764] Linking static target lib/librte_rcu.a 00:02:58.119 [108/764] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:58.119 [109/764] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:58.119 [110/764] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:58.119 [111/764] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:58.119 [112/764] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:58.119 [113/764] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:58.119 [114/764] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.378 [115/764] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:58.378 [116/764] Linking static target lib/librte_mbuf.a 00:02:58.378 [117/764] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:58.378 [118/764] Linking static target lib/librte_net.a 00:02:58.378 [119/764] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.378 [120/764] Compiling C object 
lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:58.378 [121/764] Linking static target lib/librte_meter.a 00:02:58.378 [122/764] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:58.636 [123/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:58.636 [124/764] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.636 [125/764] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:58.636 [126/764] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:58.636 [127/764] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.636 [128/764] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.894 [129/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:58.894 [130/764] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:59.152 [131/764] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:59.152 [132/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:59.152 [133/764] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:59.152 [134/764] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:59.410 [135/764] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:59.410 [136/764] Linking static target lib/librte_pci.a 00:02:59.410 [137/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:59.410 [138/764] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:59.410 [139/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:59.410 [140/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:59.410 [141/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:59.410 [142/764] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.410 [143/764] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:59.410 [144/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:59.410 [145/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:59.668 [146/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:59.668 [147/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:59.668 [148/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:59.668 [149/764] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:59.668 [150/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:59.668 [151/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:59.668 [152/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:59.668 [153/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:59.668 [154/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:59.668 [155/764] Linking static target lib/librte_cmdline.a 00:02:59.925 [156/764] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:59.925 [157/764] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:59.925 [158/764] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:59.925 [159/764] Linking static target lib/librte_metrics.a 
00:02:59.925 [160/764] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:59.925 [161/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:00.183 [162/764] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:00.183 [163/764] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.183 [164/764] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.440 [165/764] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gf2_poly_math.c.o 00:03:00.441 [166/764] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:00.441 [167/764] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:00.441 [168/764] Linking static target lib/librte_timer.a 00:03:00.698 [169/764] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.698 [170/764] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:00.698 [171/764] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:00.698 [172/764] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:00.698 [173/764] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:00.956 [174/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:01.243 [175/764] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:01.243 [176/764] Linking static target lib/librte_bitratestats.a 00:03:01.243 [177/764] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:01.243 [178/764] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.502 [179/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:01.502 [180/764] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:01.502 [181/764] Linking static target lib/librte_bbdev.a 00:03:01.502 [182/764] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:01.502 [183/764] Linking static target lib/librte_hash.a 00:03:01.502 [184/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:01.759 [185/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:01.759 [186/764] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:03:01.759 [187/764] Linking static target lib/acl/libavx2_tmp.a 00:03:01.759 [188/764] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.759 [189/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:01.759 [190/764] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:02.018 [191/764] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.018 [192/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:02.018 [193/764] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:02.018 [194/764] Linking static target lib/librte_ethdev.a 00:03:02.018 [195/764] Linking static target lib/librte_cfgfile.a 00:03:02.018 [196/764] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.018 [197/764] Linking target lib/librte_eal.so.25.0 00:03:02.276 [198/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:02.276 [199/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:02.276 [200/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:02.276 [201/764] Generating symbol file lib/librte_eal.so.25.0.p/librte_eal.so.25.0.symbols 00:03:02.276 
[202/764] Linking target lib/librte_ring.so.25.0 00:03:02.276 [203/764] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.276 [204/764] Linking target lib/librte_meter.so.25.0 00:03:02.276 [205/764] Generating symbol file lib/librte_ring.so.25.0.p/librte_ring.so.25.0.symbols 00:03:02.276 [206/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:02.276 [207/764] Linking target lib/librte_rcu.so.25.0 00:03:02.276 [208/764] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:02.276 [209/764] Linking target lib/librte_mempool.so.25.0 00:03:02.276 [210/764] Linking target lib/librte_pci.so.25.0 00:03:02.276 [211/764] Generating symbol file lib/librte_meter.so.25.0.p/librte_meter.so.25.0.symbols 00:03:02.534 [212/764] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:02.534 [213/764] Linking target lib/librte_timer.so.25.0 00:03:02.534 [214/764] Linking target lib/librte_cfgfile.so.25.0 00:03:02.534 [215/764] Generating symbol file lib/librte_rcu.so.25.0.p/librte_rcu.so.25.0.symbols 00:03:02.534 [216/764] Generating symbol file lib/librte_mempool.so.25.0.p/librte_mempool.so.25.0.symbols 00:03:02.534 [217/764] Generating symbol file lib/librte_pci.so.25.0.p/librte_pci.so.25.0.symbols 00:03:02.534 [218/764] Linking target lib/librte_mbuf.so.25.0 00:03:02.534 [219/764] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:02.534 [220/764] Generating symbol file lib/librte_timer.so.25.0.p/librte_timer.so.25.0.symbols 00:03:02.534 [221/764] Generating symbol file lib/librte_mbuf.so.25.0.p/librte_mbuf.so.25.0.symbols 00:03:02.534 [222/764] Linking target lib/librte_net.so.25.0 00:03:02.534 [223/764] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:03:02.534 [224/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:02.534 [225/764] Linking static target lib/librte_acl.a 00:03:02.793 [226/764] Linking target lib/librte_bbdev.so.25.0 00:03:02.793 [227/764] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:02.793 [228/764] Linking static target lib/librte_compressdev.a 00:03:02.793 [229/764] Linking static target lib/librte_bpf.a 00:03:02.793 [230/764] Generating symbol file lib/librte_net.so.25.0.p/librte_net.so.25.0.symbols 00:03:02.793 [231/764] Linking target lib/librte_cmdline.so.25.0 00:03:02.793 [232/764] Linking target lib/librte_hash.so.25.0 00:03:02.793 [233/764] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:02.793 [234/764] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:02.793 [235/764] Generating symbol file lib/librte_hash.so.25.0.p/librte_hash.so.25.0.symbols 00:03:02.793 [236/764] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:02.793 [237/764] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.051 [238/764] Linking target lib/librte_acl.so.25.0 00:03:03.051 [239/764] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.051 [240/764] Generating symbol file lib/librte_acl.so.25.0.p/librte_acl.so.25.0.symbols 00:03:03.051 [241/764] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.051 [242/764] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:03.051 [243/764] Linking static 
target lib/librte_distributor.a 00:03:03.051 [244/764] Linking target lib/librte_compressdev.so.25.0 00:03:03.051 [245/764] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:03.310 [246/764] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:03.310 [247/764] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.310 [248/764] Linking target lib/librte_distributor.so.25.0 00:03:03.568 [249/764] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:03.568 [250/764] Linking static target lib/librte_dmadev.a 00:03:03.568 [251/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:03.568 [252/764] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:03.826 [253/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:03:03.826 [254/764] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:03.826 [255/764] Linking static target lib/librte_efd.a 00:03:04.084 [256/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:04.084 [257/764] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.084 [258/764] Linking target lib/librte_dmadev.so.25.0 00:03:04.084 [259/764] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.084 [260/764] Linking target lib/librte_efd.so.25.0 00:03:04.084 [261/764] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:04.084 [262/764] Linking static target lib/librte_cryptodev.a 00:03:04.084 [263/764] Generating symbol file lib/librte_dmadev.so.25.0.p/librte_dmadev.so.25.0.symbols 00:03:04.084 [264/764] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:03:04.084 [265/764] Linking static target lib/librte_dispatcher.a 00:03:04.343 [266/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:04.343 [267/764] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:04.601 [268/764] Linking static target lib/librte_gpudev.a 00:03:04.601 [269/764] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.601 [270/764] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:04.601 [271/764] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:03:04.601 [272/764] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:04.601 [273/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:04.859 [274/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:04.859 [275/764] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:04.859 [276/764] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:05.117 [277/764] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:05.117 [278/764] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.117 [279/764] Linking target lib/librte_cryptodev.so.25.0 00:03:05.117 [280/764] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.117 [281/764] Linking target lib/librte_gpudev.so.25.0 00:03:05.117 [282/764] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:05.117 [283/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:05.117 [284/764] 
Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:05.117 [285/764] Linking static target lib/librte_eventdev.a 00:03:05.117 [286/764] Generating symbol file lib/librte_cryptodev.so.25.0.p/librte_cryptodev.so.25.0.symbols 00:03:05.117 [287/764] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:05.375 [288/764] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:05.375 [289/764] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:05.375 [290/764] Linking static target lib/librte_gro.a 00:03:05.375 [291/764] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:05.375 [292/764] Linking static target lib/librte_gso.a 00:03:05.375 [293/764] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.375 [294/764] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.375 [295/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:05.634 [296/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:05.634 [297/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:05.634 [298/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:05.634 [299/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:05.634 [300/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:05.634 [301/764] Linking static target lib/librte_ip_frag.a 00:03:05.893 [302/764] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:05.893 [303/764] Linking static target lib/librte_jobstats.a 00:03:05.893 [304/764] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.893 [305/764] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:05.893 [306/764] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:05.893 [307/764] Linking static target lib/librte_latencystats.a 00:03:05.893 [308/764] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.152 [309/764] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:06.152 [310/764] Linking target lib/librte_ethdev.so.25.0 00:03:06.152 [311/764] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.152 [312/764] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.152 [313/764] Linking target lib/librte_jobstats.so.25.0 00:03:06.152 [314/764] Generating symbol file lib/librte_ethdev.so.25.0.p/librte_ethdev.so.25.0.symbols 00:03:06.152 [315/764] Linking target lib/librte_metrics.so.25.0 00:03:06.152 [316/764] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:06.152 [317/764] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:06.152 [318/764] Linking target lib/librte_bpf.so.25.0 00:03:06.410 [319/764] Linking target lib/librte_gro.so.25.0 00:03:06.410 [320/764] Generating symbol file lib/librte_metrics.so.25.0.p/librte_metrics.so.25.0.symbols 00:03:06.410 [321/764] Linking target lib/librte_gso.so.25.0 00:03:06.410 [322/764] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:03:06.410 [323/764] Generating symbol file lib/librte_bpf.so.25.0.p/librte_bpf.so.25.0.symbols 00:03:06.410 [324/764] Linking static target lib/librte_lpm.a 00:03:06.410 [325/764] Linking 
target lib/librte_bitratestats.so.25.0 00:03:06.410 [326/764] Linking target lib/librte_latencystats.so.25.0 00:03:06.410 [327/764] Linking target lib/librte_ip_frag.so.25.0 00:03:06.410 [328/764] Compiling C object lib/librte_power.a.p/power_rte_power_cpufreq.c.o 00:03:06.410 [329/764] Compiling C object lib/librte_power.a.p/power_rte_power_qos.c.o 00:03:06.410 [330/764] Generating symbol file lib/librte_ip_frag.so.25.0.p/librte_ip_frag.so.25.0.symbols 00:03:06.702 [331/764] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:06.702 [332/764] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.702 [333/764] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:06.702 [334/764] Linking target lib/librte_lpm.so.25.0 00:03:06.702 [335/764] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:06.702 [336/764] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:06.702 [337/764] Linking static target lib/librte_pcapng.a 00:03:06.702 [338/764] Generating symbol file lib/librte_lpm.so.25.0.p/librte_lpm.so.25.0.symbols 00:03:06.702 [339/764] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:06.985 [340/764] Linking static target lib/librte_power.a 00:03:06.985 [341/764] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:06.985 [342/764] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.985 [343/764] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:06.985 [344/764] Linking static target lib/librte_regexdev.a 00:03:06.985 [345/764] Linking target lib/librte_eventdev.so.25.0 00:03:06.986 [346/764] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:06.986 [347/764] Linking static target lib/librte_rawdev.a 00:03:06.986 [348/764] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.986 [349/764] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:06.986 [350/764] Linking target lib/librte_pcapng.so.25.0 00:03:06.986 [351/764] Generating symbol file lib/librte_eventdev.so.25.0.p/librte_eventdev.so.25.0.symbols 00:03:06.986 [352/764] Linking target lib/librte_dispatcher.so.25.0 00:03:06.986 [353/764] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:06.986 [354/764] Generating symbol file lib/librte_pcapng.so.25.0.p/librte_pcapng.so.25.0.symbols 00:03:06.986 [355/764] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:06.986 [356/764] Linking static target lib/librte_member.a 00:03:07.244 [357/764] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:07.244 [358/764] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:07.244 [359/764] Linking static target lib/librte_mldev.a 00:03:07.244 [360/764] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.244 [361/764] Linking target lib/librte_rawdev.so.25.0 00:03:07.502 [362/764] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.502 [363/764] Linking target lib/librte_member.so.25.0 00:03:07.502 [364/764] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:07.502 [365/764] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:07.502 [366/764] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 
00:03:07.502 [367/764] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.502 [368/764] Linking target lib/librte_regexdev.so.25.0 00:03:07.502 [369/764] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:07.502 [370/764] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:07.502 [371/764] Linking target lib/librte_power.so.25.0 00:03:07.502 [372/764] Linking static target lib/librte_reorder.a 00:03:07.760 [373/764] Generating symbol file lib/librte_power.so.25.0.p/librte_power.so.25.0.symbols 00:03:07.760 [374/764] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:07.760 [375/764] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:07.760 [376/764] Linking static target lib/librte_rib.a 00:03:07.760 [377/764] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:07.760 [378/764] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.760 [379/764] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:07.760 [380/764] Linking target lib/librte_reorder.so.25.0 00:03:07.760 [381/764] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:07.760 [382/764] Linking static target lib/librte_stack.a 00:03:07.760 [383/764] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:07.761 [384/764] Generating symbol file lib/librte_reorder.so.25.0.p/librte_reorder.so.25.0.symbols 00:03:08.019 [385/764] Linking static target lib/librte_security.a 00:03:08.019 [386/764] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:08.019 [387/764] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.019 [388/764] Linking target lib/librte_stack.so.25.0 00:03:08.019 [389/764] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.019 [390/764] Linking target lib/librte_rib.so.25.0 00:03:08.277 [391/764] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:08.277 [392/764] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:08.277 [393/764] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.277 [394/764] Generating symbol file lib/librte_rib.so.25.0.p/librte_rib.so.25.0.symbols 00:03:08.277 [395/764] Linking target lib/librte_security.so.25.0 00:03:08.277 [396/764] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:08.277 [397/764] Linking static target lib/librte_sched.a 00:03:08.277 [398/764] Generating symbol file lib/librte_security.so.25.0.p/librte_security.so.25.0.symbols 00:03:08.277 [399/764] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:08.277 [400/764] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.535 [401/764] Linking target lib/librte_mldev.so.25.0 00:03:08.535 [402/764] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.535 [403/764] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:08.535 [404/764] Linking target lib/librte_sched.so.25.0 00:03:08.793 [405/764] Generating symbol file lib/librte_sched.so.25.0.p/librte_sched.so.25.0.symbols 00:03:08.793 [406/764] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:08.793 [407/764] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:08.793 [408/764] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:08.793 [409/764] 
Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:09.052 [410/764] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:09.052 [411/764] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:09.052 [412/764] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:09.336 [413/764] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:09.336 [414/764] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:09.336 [415/764] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:09.336 [416/764] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:09.595 [417/764] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:09.595 [418/764] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:09.595 [419/764] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:09.595 [420/764] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:09.595 [421/764] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:03:09.853 [422/764] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:09.853 [423/764] Linking static target lib/librte_ipsec.a 00:03:09.853 [424/764] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:10.111 [425/764] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.111 [426/764] Linking target lib/librte_ipsec.so.25.0 00:03:10.111 [427/764] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:10.111 [428/764] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:10.111 [429/764] Generating symbol file lib/librte_ipsec.so.25.0.p/librte_ipsec.so.25.0.symbols 00:03:10.111 [430/764] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:10.111 [431/764] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:10.369 [432/764] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:10.369 [433/764] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:10.626 [434/764] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:10.627 [435/764] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:10.627 [436/764] Linking static target lib/librte_fib.a 00:03:10.627 [437/764] Linking static target lib/librte_pdcp.a 00:03:10.627 [438/764] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:10.627 [439/764] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:10.627 [440/764] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.884 [441/764] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:10.884 [442/764] Linking target lib/librte_pdcp.so.25.0 00:03:10.884 [443/764] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.884 [444/764] Linking target lib/librte_fib.so.25.0 00:03:10.884 [445/764] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:10.884 [446/764] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:10.884 [447/764] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:11.142 [448/764] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:11.400 [449/764] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:11.400 [450/764] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:11.400 [451/764] Compiling C object 
lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:11.400 [452/764] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:11.400 [453/764] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:11.400 [454/764] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:11.659 [455/764] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:11.659 [456/764] Linking static target lib/librte_port.a 00:03:11.659 [457/764] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:11.659 [458/764] Linking static target lib/librte_pdump.a 00:03:11.659 [459/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:11.659 [460/764] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:11.659 [461/764] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:11.659 [462/764] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.918 [463/764] Linking target lib/librte_pdump.so.25.0 00:03:11.918 [464/764] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.918 [465/764] Linking target lib/librte_port.so.25.0 00:03:11.918 [466/764] Generating symbol file lib/librte_port.so.25.0.p/librte_port.so.25.0.symbols 00:03:12.176 [467/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:12.176 [468/764] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:03:12.176 [469/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:12.176 [470/764] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:12.176 [471/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:12.176 [472/764] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:12.176 [473/764] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:12.462 [474/764] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:12.462 [475/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:12.462 [476/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:12.462 [477/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:12.462 [478/764] Linking static target lib/librte_table.a 00:03:12.734 [479/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:12.734 [480/764] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:12.992 [481/764] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.992 [482/764] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:12.992 [483/764] Linking target lib/librte_table.so.25.0 00:03:12.993 [484/764] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:12.993 [485/764] Generating symbol file lib/librte_table.so.25.0.p/librte_table.so.25.0.symbols 00:03:12.993 [486/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:12.993 [487/764] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:13.250 [488/764] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:13.250 [489/764] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:13.250 [490/764] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:13.250 [491/764] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:13.508 [492/764] 
Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:13.508 [493/764] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:13.508 [494/764] Linking static target lib/librte_graph.a 00:03:13.508 [495/764] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:13.767 [496/764] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:13.767 [497/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:13.767 [498/764] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:13.767 [499/764] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:14.025 [500/764] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.025 [501/764] Linking target lib/librte_graph.so.25.0 00:03:14.025 [502/764] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:14.025 [503/764] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:14.025 [504/764] Generating symbol file lib/librte_graph.so.25.0.p/librte_graph.so.25.0.symbols 00:03:14.283 [505/764] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:14.283 [506/764] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:14.283 [507/764] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:14.283 [508/764] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:14.283 [509/764] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:14.542 [510/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:14.542 [511/764] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:14.542 [512/764] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:14.542 [513/764] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:14.542 [514/764] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:14.800 [515/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:14.800 [516/764] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:14.800 [517/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:14.800 [518/764] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:14.800 [519/764] Linking static target lib/librte_node.a 00:03:14.800 [520/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:14.800 [521/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:15.059 [522/764] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.059 [523/764] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:15.059 [524/764] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:15.059 [525/764] Linking target lib/librte_node.so.25.0 00:03:15.059 [526/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:15.059 [527/764] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:15.317 [528/764] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:15.317 [529/764] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:15.318 [530/764] Linking static target drivers/librte_bus_vdev.a 00:03:15.318 [531/764] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:15.318 [532/764] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:15.318 [533/764] Linking 
static target drivers/librte_bus_pci.a 00:03:15.318 [534/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:15.318 [535/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:15.318 [536/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:15.318 [537/764] Compiling C object drivers/librte_bus_pci.so.25.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:15.318 [538/764] Compiling C object drivers/librte_bus_vdev.so.25.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:15.318 [539/764] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.576 [540/764] Linking target drivers/librte_bus_vdev.so.25.0 00:03:15.576 [541/764] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:15.576 [542/764] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:15.576 [543/764] Generating symbol file drivers/librte_bus_vdev.so.25.0.p/librte_bus_vdev.so.25.0.symbols 00:03:15.576 [544/764] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.576 [545/764] Linking target drivers/librte_bus_pci.so.25.0 00:03:15.576 [546/764] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:15.576 [547/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:15.576 [548/764] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:15.576 [549/764] Compiling C object drivers/librte_mempool_ring.so.25.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:15.576 [550/764] Linking static target drivers/librte_mempool_ring.a 00:03:15.576 [551/764] Linking target drivers/librte_mempool_ring.so.25.0 00:03:15.834 [552/764] Generating symbol file drivers/librte_bus_pci.so.25.0.p/librte_bus_pci.so.25.0.symbols 00:03:15.834 [553/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:16.093 [554/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:16.351 [555/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:16.351 [556/764] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:16.610 [557/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:16.869 [558/764] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:16.869 [559/764] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:16.869 [560/764] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:16.869 [561/764] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:16.869 [562/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:17.127 [563/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:17.128 [564/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:17.386 [565/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:17.386 [566/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:17.386 [567/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:17.386 [568/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:17.645 [569/764] Compiling C object 
drivers/libtmp_rte_power_acpi.a.p/power_acpi_acpi_cpufreq.c.o 00:03:17.645 [570/764] Linking static target drivers/libtmp_rte_power_acpi.a 00:03:17.645 [571/764] Compiling C object drivers/libtmp_rte_power_amd_pstate.a.p/power_amd_pstate_amd_pstate_cpufreq.c.o 00:03:17.645 [572/764] Linking static target drivers/libtmp_rte_power_amd_pstate.a 00:03:17.904 [573/764] Compiling C object drivers/libtmp_rte_power_cppc.a.p/power_cppc_cppc_cpufreq.c.o 00:03:17.904 [574/764] Linking static target drivers/libtmp_rte_power_cppc.a 00:03:17.904 [575/764] Generating drivers/rte_power_acpi.pmd.c with a custom command 00:03:17.904 [576/764] Compiling C object drivers/librte_power_acpi.a.p/meson-generated_.._rte_power_acpi.pmd.c.o 00:03:17.904 [577/764] Linking static target drivers/librte_power_acpi.a 00:03:17.904 [578/764] Generating drivers/rte_power_amd_pstate.pmd.c with a custom command 00:03:17.904 [579/764] Compiling C object drivers/librte_power_acpi.so.25.0.p/meson-generated_.._rte_power_acpi.pmd.c.o 00:03:17.904 [580/764] Compiling C object drivers/librte_power_amd_pstate.a.p/meson-generated_.._rte_power_amd_pstate.pmd.c.o 00:03:17.904 [581/764] Linking static target drivers/librte_power_amd_pstate.a 00:03:17.904 [582/764] Linking target drivers/librte_power_acpi.so.25.0 00:03:17.904 [583/764] Compiling C object drivers/librte_power_amd_pstate.so.25.0.p/meson-generated_.._rte_power_amd_pstate.pmd.c.o 00:03:17.904 [584/764] Compiling C object drivers/libtmp_rte_power_intel_pstate.a.p/power_intel_pstate_intel_pstate_cpufreq.c.o 00:03:17.904 [585/764] Linking static target drivers/libtmp_rte_power_intel_pstate.a 00:03:17.904 [586/764] Linking target drivers/librte_power_amd_pstate.so.25.0 00:03:17.904 [587/764] Generating drivers/rte_power_cppc.pmd.c with a custom command 00:03:17.904 [588/764] Compiling C object drivers/librte_power_cppc.a.p/meson-generated_.._rte_power_cppc.pmd.c.o 00:03:17.904 [589/764] Linking static target drivers/librte_power_cppc.a 00:03:17.904 [590/764] Compiling C object drivers/librte_power_cppc.so.25.0.p/meson-generated_.._rte_power_cppc.pmd.c.o 00:03:17.904 [591/764] Compiling C object drivers/libtmp_rte_power_kvm_vm.a.p/power_kvm_vm_guest_channel.c.o 00:03:17.904 [592/764] Linking target drivers/librte_power_cppc.so.25.0 00:03:17.904 [593/764] Generating drivers/rte_power_intel_pstate.pmd.c with a custom command 00:03:17.904 [594/764] Compiling C object drivers/librte_power_intel_pstate.a.p/meson-generated_.._rte_power_intel_pstate.pmd.c.o 00:03:17.904 [595/764] Linking static target drivers/librte_power_intel_pstate.a 00:03:18.163 [596/764] Compiling C object drivers/librte_power_intel_pstate.so.25.0.p/meson-generated_.._rte_power_intel_pstate.pmd.c.o 00:03:18.163 [597/764] Compiling C object drivers/libtmp_rte_power_kvm_vm.a.p/power_kvm_vm_kvm_vm.c.o 00:03:18.163 [598/764] Linking static target drivers/libtmp_rte_power_kvm_vm.a 00:03:18.163 [599/764] Linking target drivers/librte_power_intel_pstate.so.25.0 00:03:18.163 [600/764] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:03:18.163 [601/764] Generating drivers/rte_power_kvm_vm.pmd.c with a custom command 00:03:18.163 [602/764] Compiling C object drivers/librte_power_kvm_vm.a.p/meson-generated_.._rte_power_kvm_vm.pmd.c.o 00:03:18.164 [603/764] Linking static target drivers/librte_power_kvm_vm.a 00:03:18.164 [604/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:18.164 [605/764] Compiling C object 
drivers/librte_power_kvm_vm.so.25.0.p/meson-generated_.._rte_power_kvm_vm.pmd.c.o 00:03:18.164 [606/764] Compiling C object drivers/libtmp_rte_power_intel_uncore.a.p/power_intel_uncore_intel_uncore.c.o 00:03:18.164 [607/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:18.164 [608/764] Linking static target drivers/libtmp_rte_power_intel_uncore.a 00:03:18.423 [609/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:18.423 [610/764] Generating drivers/rte_power_kvm_vm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.423 [611/764] Generating drivers/rte_power_intel_uncore.pmd.c with a custom command 00:03:18.423 [612/764] Linking target drivers/librte_power_kvm_vm.so.25.0 00:03:18.423 [613/764] Compiling C object drivers/librte_power_intel_uncore.a.p/meson-generated_.._rte_power_intel_uncore.pmd.c.o 00:03:18.423 [614/764] Compiling C object drivers/librte_power_intel_uncore.so.25.0.p/meson-generated_.._rte_power_intel_uncore.pmd.c.o 00:03:18.423 [615/764] Linking static target drivers/librte_power_intel_uncore.a 00:03:18.423 [616/764] Linking target drivers/librte_power_intel_uncore.so.25.0 00:03:18.423 [617/764] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:18.423 [618/764] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:18.726 [619/764] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:18.726 [620/764] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:18.726 [621/764] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:18.726 [622/764] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:18.726 [623/764] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:18.726 [624/764] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:18.983 [625/764] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:18.983 [626/764] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:18.983 [627/764] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:03:18.983 [628/764] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:18.983 [629/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:18.983 [630/764] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:19.241 [631/764] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:19.241 [632/764] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:19.241 [633/764] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:19.241 [634/764] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:19.241 [635/764] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:19.241 [636/764] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:19.241 [637/764] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:19.241 [638/764] Linking static target drivers/librte_net_i40e.a 00:03:19.499 [639/764] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:19.499 [640/764] Compiling C object drivers/librte_net_i40e.so.25.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:19.499 [641/764] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:19.759 [642/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:19.759 [643/764] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:19.759 [644/764] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by 
meson to capture output) 00:03:19.759 [645/764] Linking target drivers/librte_net_i40e.so.25.0 00:03:19.759 [646/764] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:19.759 [647/764] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:20.017 [648/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:20.276 [649/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:20.276 [650/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:20.276 [651/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:20.276 [652/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:20.534 [653/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:20.534 [654/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:20.534 [655/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:20.793 [656/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:20.793 [657/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:20.793 [658/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:20.793 [659/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:20.793 [660/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:20.793 [661/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:21.051 [662/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:21.051 [663/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:21.051 [664/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:21.051 [665/764] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:21.310 [666/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:21.310 [667/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:21.310 [668/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:21.569 [669/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:21.569 [670/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:21.569 [671/764] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:22.136 [672/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:22.136 [673/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:22.136 [674/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:22.394 [675/764] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:22.394 [676/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:22.394 [677/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:22.394 [678/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:22.394 [679/764] Compiling C object 
app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:22.394 [680/764] Linking static target lib/librte_pipeline.a 00:03:22.394 [681/764] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:22.394 [682/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:22.651 [683/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:22.651 [684/764] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:22.651 [685/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:22.651 [686/764] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:22.651 [687/764] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:22.651 [688/764] Linking static target lib/librte_vhost.a 00:03:22.651 [689/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:22.909 [690/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:22.909 [691/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:22.909 [692/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:22.909 [693/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:23.168 [694/764] Linking target app/dpdk-dumpcap 00:03:23.168 [695/764] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:23.168 [696/764] Linking target app/dpdk-pdump 00:03:23.168 [697/764] Linking target app/dpdk-graph 00:03:23.168 [698/764] Linking target app/dpdk-proc-info 00:03:23.426 [699/764] Linking target app/dpdk-test-acl 00:03:23.426 [700/764] Linking target app/dpdk-test-cmdline 00:03:23.426 [701/764] Linking target app/dpdk-test-compress-perf 00:03:23.426 [702/764] Linking target app/dpdk-test-crypto-perf 00:03:23.426 [703/764] Linking target app/dpdk-test-dma-perf 00:03:23.684 [704/764] Linking target app/dpdk-test-flow-perf 00:03:23.684 [705/764] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.684 [706/764] Linking target app/dpdk-test-fib 00:03:23.684 [707/764] Linking target lib/librte_vhost.so.25.0 00:03:23.684 [708/764] Linking target app/dpdk-test-gpudev 00:03:23.684 [709/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:23.684 [710/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:23.684 [711/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:23.941 [712/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:23.941 [713/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:23.941 [714/764] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:23.941 [715/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:23.941 [716/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:24.198 [717/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:24.198 [718/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:24.198 [719/764] Linking target app/dpdk-test-eventdev 00:03:24.198 [720/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:24.198 [721/764] Linking target app/dpdk-test-bbdev 00:03:24.456 [722/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:24.456 [723/764] 
Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:24.456 [724/764] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:24.456 [725/764] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.456 [726/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:24.456 [727/764] Linking target lib/librte_pipeline.so.25.0 00:03:24.714 [728/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:24.714 [729/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:24.714 [730/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:24.714 [731/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:24.972 [732/764] Linking target app/dpdk-test-pipeline 00:03:24.972 [733/764] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:24.972 [734/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:24.972 [735/764] Compiling C object app/dpdk-testpmd.p/test-pmd_hairpin.c.o 00:03:25.230 [736/764] Linking target app/dpdk-test-mldev 00:03:25.230 [737/764] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:25.230 [738/764] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:25.230 [739/764] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:25.553 [740/764] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:25.553 [741/764] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:25.553 [742/764] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:25.553 [743/764] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:25.553 [744/764] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:25.827 [745/764] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:25.827 [746/764] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:25.827 [747/764] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:26.086 [748/764] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:26.345 [749/764] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:26.345 [750/764] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:26.345 [751/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:26.345 [752/764] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:26.345 [753/764] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:26.604 [754/764] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:26.604 [755/764] Linking target app/dpdk-test-sad 00:03:26.604 [756/764] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:26.604 [757/764] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:26.604 [758/764] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:26.862 [759/764] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:03:26.862 [760/764] Linking target app/dpdk-test-regex 00:03:26.862 [761/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:27.121 [762/764] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:27.379 [763/764] Linking target app/dpdk-testpmd 00:03:27.379 [764/764] Linking target app/dpdk-test-security-perf 00:03:27.379 22:48:06 build_native_dpdk -- common/autobuild_common.sh@201 
-- $ uname -s 00:03:27.379 22:48:06 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:27.379 22:48:06 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:27.379 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:27.379 [0/1] Installing files. 00:03:27.641 Installing subdir /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/counters.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/cpu.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/memory.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:27.641 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_eddsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_skeleton.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_gre.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:27.641 Installing 
/home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_gre.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_ipv4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_ipv4.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_mpls.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_mpls.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:27.641 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.642 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.642 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.642 
Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.642 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.642 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.643 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.643 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:27.643 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:27.643 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:27.644 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.644 
Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.644 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:27.645 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.645 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:27.645 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:27.645 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.645 Installing lib/librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_argparse.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_metrics.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 
Installing lib/librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.646 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing lib/librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing lib/librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing lib/librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing lib/librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing lib/librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing lib/librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing lib/librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing lib/librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing lib/librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing lib/librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing lib/librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing lib/librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing lib/librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing lib/librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing 
lib/librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing lib/librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing lib/librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing drivers/librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:27.906 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing drivers/librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:27.906 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing drivers/librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:27.906 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing drivers/librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:27.906 Installing drivers/librte_power_acpi.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing drivers/librte_power_acpi.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:27.906 Installing drivers/librte_power_amd_pstate.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing drivers/librte_power_amd_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:27.906 Installing drivers/librte_power_cppc.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing drivers/librte_power_cppc.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:27.906 Installing drivers/librte_power_intel_pstate.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing drivers/librte_power_intel_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:27.906 Installing drivers/librte_power_intel_uncore.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing drivers/librte_power_intel_uncore.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:27.906 Installing drivers/librte_power_kvm_vm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.906 Installing drivers/librte_power_kvm_vm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:27.906 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.906 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.906 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.906 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.906 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.906 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.906 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.906 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.906 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.906 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.906 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.906 Installing app/dpdk-test-fib to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.906 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.906 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.906 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.906 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.906 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.906 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.906 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.906 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.906 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.906 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.906 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.906 Installing /home/vagrant/spdk_repo/dpdk/lib/argparse/rte_argparse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.906 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.906 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.906 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.906 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.906 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.906 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.906 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.906 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.906 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.906 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.906 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.906 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.906 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.906 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.906 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.906 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.906 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.906 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.906 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.906 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.906 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.906 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.906 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitset.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore_var.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/ptr_compress/rte_ptr_compress.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_cksum.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip4.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.907 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.908 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.908 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.908 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.908 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.908 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.908 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.908 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.908 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.908 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.908 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.908 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.908 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.908 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.908 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.908 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.908 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.908 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.908 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.908 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.908 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.908 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.908 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.908 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.908 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.908 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.908 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.908 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.908 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/power/power_cpufreq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/power/power_uncore_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_cpufreq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_qos.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing 
/home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.168 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing 
/home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/drivers/power/kvm_vm/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry-exporter.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc 
to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:28.169 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:28.169 Installing symlink pointing to librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.25 00:03:28.169 Installing symlink pointing to librte_log.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:28.169 Installing symlink pointing to librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.25 00:03:28.169 Installing symlink pointing to librte_kvargs.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:28.169 Installing symlink pointing to librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so.25 00:03:28.169 Installing symlink pointing to librte_argparse.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so 00:03:28.169 Installing symlink pointing to librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.25 00:03:28.169 Installing symlink pointing to librte_telemetry.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:28.169 Installing symlink pointing to librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.25 00:03:28.169 Installing symlink pointing to librte_eal.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:28.169 Installing symlink pointing to librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.25 00:03:28.169 Installing symlink pointing to librte_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:28.169 Installing symlink pointing to librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.25 00:03:28.169 Installing symlink pointing to librte_rcu.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:28.169 Installing symlink pointing to librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.25 00:03:28.169 Installing symlink pointing to librte_mempool.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:28.169 Installing symlink pointing to librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.25 00:03:28.169 Installing symlink pointing to librte_mbuf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:28.169 Installing symlink pointing to librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.25 00:03:28.169 Installing symlink pointing to librte_net.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:28.169 Installing symlink pointing to librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.25 00:03:28.169 Installing symlink pointing to librte_meter.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:28.170 Installing symlink pointing to librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.25 00:03:28.170 Installing symlink pointing to librte_ethdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:28.170 Installing symlink pointing to librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.25 00:03:28.170 Installing symlink pointing to librte_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:28.170 Installing symlink pointing to librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.25 00:03:28.170 Installing 
symlink pointing to librte_cmdline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:28.170 Installing symlink pointing to librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.25 00:03:28.170 Installing symlink pointing to librte_metrics.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:28.170 Installing symlink pointing to librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.25 00:03:28.170 Installing symlink pointing to librte_hash.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:28.170 Installing symlink pointing to librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.25 00:03:28.170 Installing symlink pointing to librte_timer.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:28.170 Installing symlink pointing to librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.25 00:03:28.170 Installing symlink pointing to librte_acl.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:28.170 Installing symlink pointing to librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.25 00:03:28.170 Installing symlink pointing to librte_bbdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:28.170 Installing symlink pointing to librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.25 00:03:28.170 Installing symlink pointing to librte_bitratestats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:28.170 Installing symlink pointing to librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.25 00:03:28.170 Installing symlink pointing to librte_bpf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:28.170 Installing symlink pointing to librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.25 00:03:28.170 Installing symlink pointing to librte_cfgfile.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:28.170 Installing symlink pointing to librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.25 00:03:28.170 Installing symlink pointing to librte_compressdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:28.170 Installing symlink pointing to librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.25 00:03:28.170 Installing symlink pointing to librte_cryptodev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:28.170 Installing symlink pointing to librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.25 00:03:28.170 Installing symlink pointing to librte_distributor.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:28.170 Installing symlink pointing to librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.25 00:03:28.170 Installing symlink pointing to librte_dmadev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:28.170 Installing symlink pointing to librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.25 00:03:28.170 Installing symlink pointing to librte_efd.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:28.170 Installing symlink pointing to librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.25 00:03:28.170 Installing 
symlink pointing to librte_eventdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:28.170 Installing symlink pointing to librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.25 00:03:28.170 Installing symlink pointing to librte_dispatcher.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:28.170 Installing symlink pointing to librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.25 00:03:28.170 Installing symlink pointing to librte_gpudev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:28.170 Installing symlink pointing to librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.25 00:03:28.170 Installing symlink pointing to librte_gro.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:28.170 Installing symlink pointing to librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.25 00:03:28.170 Installing symlink pointing to librte_gso.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:28.170 Installing symlink pointing to librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.25 00:03:28.170 Installing symlink pointing to librte_ip_frag.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:28.170 Installing symlink pointing to librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.25 00:03:28.170 Installing symlink pointing to librte_jobstats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:28.170 Installing symlink pointing to librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.25 00:03:28.170 Installing symlink pointing to librte_latencystats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:28.170 Installing symlink pointing to librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.25 00:03:28.170 Installing symlink pointing to librte_lpm.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:28.170 Installing symlink pointing to librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.25 00:03:28.170 Installing symlink pointing to librte_member.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:28.170 Installing symlink pointing to librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.25 00:03:28.170 Installing symlink pointing to librte_pcapng.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:28.170 Installing symlink pointing to librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.25 00:03:28.170 Installing symlink pointing to librte_power.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:28.170 Installing symlink pointing to librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.25 00:03:28.170 Installing symlink pointing to librte_rawdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:28.170 Installing symlink pointing to librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.25 00:03:28.170 Installing symlink pointing to librte_regexdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:28.170 Installing symlink pointing to librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.25 00:03:28.170 Installing symlink pointing 
to librte_mldev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:28.170 Installing symlink pointing to librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.25 00:03:28.170 Installing symlink pointing to librte_rib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:28.170 Installing symlink pointing to librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.25 00:03:28.170 Installing symlink pointing to librte_reorder.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:28.170 Installing symlink pointing to librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.25 00:03:28.170 Installing symlink pointing to librte_sched.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:28.170 Installing symlink pointing to librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.25 00:03:28.170 Installing symlink pointing to librte_security.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:28.170 Installing symlink pointing to librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.25 00:03:28.170 Installing symlink pointing to librte_stack.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:28.170 Installing symlink pointing to librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.25 00:03:28.170 Installing symlink pointing to librte_vhost.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:28.170 Installing symlink pointing to librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.25 00:03:28.170 Installing symlink pointing to librte_ipsec.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:28.170 Installing symlink pointing to librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.25 00:03:28.170 Installing symlink pointing to librte_pdcp.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:28.170 Installing symlink pointing to librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.25 00:03:28.170 Installing symlink pointing to librte_fib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:28.170 Installing symlink pointing to librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.25 00:03:28.170 Installing symlink pointing to librte_port.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:28.170 Installing symlink pointing to librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.25 00:03:28.170 Installing symlink pointing to librte_pdump.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:28.170 Installing symlink pointing to librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.25 00:03:28.170 Installing symlink pointing to librte_table.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:28.170 Installing symlink pointing to librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.25 00:03:28.170 Installing symlink pointing to librte_pipeline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:28.170 Installing symlink pointing to librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.25 00:03:28.170 Installing symlink pointing to librte_graph.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 
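The symlink installs above follow the usual three-level versioning scheme for shared libraries: the real file librte_<name>.so.25.0 carries the full version, librte_<name>.so.25 matches the SONAME the dynamic loader resolves at run time, and the unversioned librte_<name>.so is the development link the linker uses when an application is built with -lrte_<name>. A minimal sketch of that chain, using a made-up library name purely for illustration:

  # librte_example is hypothetical; the real names are the ones listed in the log above.
  cd /home/vagrant/spdk_repo/dpdk/build/lib
  ln -sf librte_example.so.25.0 librte_example.so.25   # run-time (SONAME) link
  ln -sf librte_example.so.25   librte_example.so      # build-time (dev) link
  readelf -d librte_example.so.25.0 | grep SONAME      # should report librte_example.so.25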
00:03:28.170 Installing symlink pointing to librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.25 00:03:28.170 Installing symlink pointing to librte_node.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:28.170 Installing symlink pointing to librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25 00:03:28.170 Installing symlink pointing to librte_bus_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:03:28.170 Installing symlink pointing to librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25 00:03:28.170 Installing symlink pointing to librte_bus_vdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:03:28.170 Installing symlink pointing to librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25 00:03:28.170 './librte_bus_pci.so' -> 'dpdk/pmds-25.0/librte_bus_pci.so' 00:03:28.170 './librte_bus_pci.so.25' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25' 00:03:28.170 './librte_bus_pci.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25.0' 00:03:28.170 './librte_bus_vdev.so' -> 'dpdk/pmds-25.0/librte_bus_vdev.so' 00:03:28.170 './librte_bus_vdev.so.25' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25' 00:03:28.170 './librte_bus_vdev.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25.0' 00:03:28.170 './librte_mempool_ring.so' -> 'dpdk/pmds-25.0/librte_mempool_ring.so' 00:03:28.170 './librte_mempool_ring.so.25' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25' 00:03:28.170 './librte_mempool_ring.so.25.0' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25.0' 00:03:28.170 './librte_net_i40e.so' -> 'dpdk/pmds-25.0/librte_net_i40e.so' 00:03:28.170 './librte_net_i40e.so.25' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25' 00:03:28.170 './librte_net_i40e.so.25.0' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25.0' 00:03:28.170 './librte_power_acpi.so' -> 'dpdk/pmds-25.0/librte_power_acpi.so' 00:03:28.171 './librte_power_acpi.so.25' -> 'dpdk/pmds-25.0/librte_power_acpi.so.25' 00:03:28.171 './librte_power_acpi.so.25.0' -> 'dpdk/pmds-25.0/librte_power_acpi.so.25.0' 00:03:28.171 './librte_power_amd_pstate.so' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so' 00:03:28.171 './librte_power_amd_pstate.so.25' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so.25' 00:03:28.171 './librte_power_amd_pstate.so.25.0' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so.25.0' 00:03:28.171 './librte_power_cppc.so' -> 'dpdk/pmds-25.0/librte_power_cppc.so' 00:03:28.171 './librte_power_cppc.so.25' -> 'dpdk/pmds-25.0/librte_power_cppc.so.25' 00:03:28.171 './librte_power_cppc.so.25.0' -> 'dpdk/pmds-25.0/librte_power_cppc.so.25.0' 00:03:28.171 './librte_power_intel_pstate.so' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so' 00:03:28.171 './librte_power_intel_pstate.so.25' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so.25' 00:03:28.171 './librte_power_intel_pstate.so.25.0' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so.25.0' 00:03:28.171 './librte_power_intel_uncore.so' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so' 00:03:28.171 './librte_power_intel_uncore.so.25' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so.25' 00:03:28.171 './librte_power_intel_uncore.so.25.0' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so.25.0' 00:03:28.171 './librte_power_kvm_vm.so' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so' 00:03:28.171 './librte_power_kvm_vm.so.25' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so.25' 00:03:28.171 
'./librte_power_kvm_vm.so.25.0' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so.25.0' 00:03:28.171 Installing symlink pointing to librte_mempool_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:03:28.171 Installing symlink pointing to librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25 00:03:28.171 Installing symlink pointing to librte_net_i40e.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:03:28.171 Installing symlink pointing to librte_power_acpi.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so.25 00:03:28.171 Installing symlink pointing to librte_power_acpi.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so 00:03:28.171 Installing symlink pointing to librte_power_amd_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so.25 00:03:28.171 Installing symlink pointing to librte_power_amd_pstate.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so 00:03:28.171 Installing symlink pointing to librte_power_cppc.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so.25 00:03:28.171 Installing symlink pointing to librte_power_cppc.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so 00:03:28.171 Installing symlink pointing to librte_power_intel_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so.25 00:03:28.171 Installing symlink pointing to librte_power_intel_pstate.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so 00:03:28.171 Installing symlink pointing to librte_power_intel_uncore.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so.25 00:03:28.171 Installing symlink pointing to librte_power_intel_uncore.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so 00:03:28.171 Installing symlink pointing to librte_power_kvm_vm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so.25 00:03:28.171 Installing symlink pointing to librte_power_kvm_vm.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so 00:03:28.171 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-25.0' 00:03:28.171 22:48:07 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:03:28.171 22:48:07 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:28.171 00:03:28.171 real 0m39.292s 00:03:28.171 user 4m35.375s 00:03:28.171 sys 0m41.647s 00:03:28.171 22:48:07 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:28.171 ************************************ 00:03:28.171 END TEST build_native_dpdk 00:03:28.171 ************************************ 00:03:28.171 22:48:07 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:28.171 22:48:07 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:28.171 22:48:07 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:28.171 22:48:07 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:28.171 22:48:07 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:28.171 22:48:07 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:28.171 22:48:07 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 
]] 00:03:28.171 22:48:07 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:28.171 22:48:07 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:28.171 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:28.429 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:28.429 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:28.429 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:28.688 Using 'verbs' RDMA provider 00:03:39.649 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:51.885 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:51.885 Creating mk/config.mk...done. 00:03:51.885 Creating mk/cc.flags.mk...done. 00:03:51.885 Type 'make' to build. 00:03:51.885 22:48:29 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:51.885 22:48:29 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:51.885 22:48:29 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:51.885 22:48:29 -- common/autotest_common.sh@10 -- $ set +x 00:03:51.885 ************************************ 00:03:51.885 START TEST make 00:03:51.885 ************************************ 00:03:51.885 22:48:29 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:51.885 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:51.885 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:51.885 meson setup builddir \ 00:03:51.885 -Dwith-libaio=enabled \ 00:03:51.885 -Dwith-liburing=enabled \ 00:03:51.885 -Dwith-libvfn=disabled \ 00:03:51.885 -Dwith-spdk=disabled \ 00:03:51.885 -Dexamples=false \ 00:03:51.885 -Dtests=false \ 00:03:51.885 -Dtools=false && \ 00:03:51.885 meson compile -C builddir && \ 00:03:51.885 cd -) 00:03:51.885 make[1]: Nothing to be done for 'all'. 
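The configure invocation above wires SPDK to the DPDK tree that was just installed: --with-dpdk=/home/vagrant/spdk_repo/dpdk/build points configure at the libdpdk.pc installed into build/lib/pkgconfig a few steps earlier, which is what the "Using ... for additional libs" line confirms. The same resolution can be checked by hand with pkg-config; this is a hedged sketch of the lookup, not the configure script's exact logic:

  # Point pkg-config at the private .pc files installed by the DPDK build above.
  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig:$PKG_CONFIG_PATH
  pkg-config --modversion libdpdk   # version of the freshly built DPDK
  pkg-config --cflags libdpdk       # include path, matching the "DPDK includes" line above
  pkg-config --libs libdpdk         # -L.../build/lib plus the librte_* link line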
00:03:52.451 The Meson build system 00:03:52.451 Version: 1.5.0 00:03:52.452 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:52.452 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:52.452 Build type: native build 00:03:52.452 Project name: xnvme 00:03:52.452 Project version: 0.7.5 00:03:52.452 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:52.452 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:52.452 Host machine cpu family: x86_64 00:03:52.452 Host machine cpu: x86_64 00:03:52.452 Message: host_machine.system: linux 00:03:52.452 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:52.452 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:52.452 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:52.452 Run-time dependency threads found: YES 00:03:52.452 Has header "setupapi.h" : NO 00:03:52.452 Has header "linux/blkzoned.h" : YES 00:03:52.452 Has header "linux/blkzoned.h" : YES (cached) 00:03:52.452 Has header "libaio.h" : YES 00:03:52.452 Library aio found: YES 00:03:52.452 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:52.452 Run-time dependency liburing found: YES 2.2 00:03:52.452 Dependency libvfn skipped: feature with-libvfn disabled 00:03:52.452 Found CMake: /usr/bin/cmake (3.27.7) 00:03:52.452 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:03:52.452 Subproject spdk : skipped: feature with-spdk disabled 00:03:52.452 Run-time dependency appleframeworks found: NO (tried framework) 00:03:52.452 Run-time dependency appleframeworks found: NO (tried framework) 00:03:52.452 Library rt found: YES 00:03:52.452 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:52.452 Configuring xnvme_config.h using configuration 00:03:52.452 Configuring xnvme.spec using configuration 00:03:52.452 Run-time dependency bash-completion found: YES 2.11 00:03:52.452 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:52.452 Program cp found: YES (/usr/bin/cp) 00:03:52.452 Build targets in project: 3 00:03:52.452 00:03:52.452 xnvme 0.7.5 00:03:52.452 00:03:52.452 Subprojects 00:03:52.452 spdk : NO Feature 'with-spdk' disabled 00:03:52.452 00:03:52.452 User defined options 00:03:52.452 examples : false 00:03:52.452 tests : false 00:03:52.452 tools : false 00:03:52.452 with-libaio : enabled 00:03:52.452 with-liburing: enabled 00:03:52.452 with-libvfn : disabled 00:03:52.452 with-spdk : disabled 00:03:52.452 00:03:52.452 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:52.709 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:52.709 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:03:52.709 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:03:52.709 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:03:52.709 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:03:52.709 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:03:52.709 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:03:52.709 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:03:52.709 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:03:52.709 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:03:52.967 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 
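Each "Has header" and "Run-time dependency ... found" line in the Meson summary above is a compile test or a pkg-config/cmake probe, which is how liburing 2.2 was picked up while libisal and the with-libvfn/with-spdk features were skipped. The probes can be replayed outside Meson, and the options recorded for the build directory can be inspected after setup; a small sketch, with the builddir path taken from this log:

  pkg-config --modversion liburing   # Meson found liburing 2.2 above
  pkg-config --exists libisal || echo "libisal not found (matches the NO in the log)"
  # List the option values Meson stored for this build directory:
  meson configure /home/vagrant/spdk_repo/spdk/xnvme/builddir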
00:03:52.967 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:03:52.967 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:03:52.967 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:03:52.967 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:03:52.967 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:03:52.967 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:03:52.967 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:03:52.967 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:03:52.967 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:03:52.967 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:03:52.967 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:03:52.967 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:03:52.967 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:03:52.967 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:03:52.967 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:03:52.967 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:03:52.967 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:03:52.967 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:03:52.967 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:03:52.967 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:03:52.967 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:03:52.967 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:03:52.967 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:03:52.967 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:03:52.967 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:03:52.967 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:03:53.224 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:03:53.224 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:03:53.224 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:03:53.224 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:03:53.224 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:03:53.224 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:03:53.224 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:03:53.224 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:03:53.224 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:03:53.224 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:03:53.224 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:03:53.224 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:03:53.224 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:03:53.224 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 
00:03:53.224 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:03:53.224 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:03:53.224 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:03:53.224 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:03:53.224 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:03:53.224 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:03:53.224 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:03:53.224 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:03:53.224 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:03:53.224 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:03:53.224 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:03:53.224 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:03:53.224 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:03:53.224 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:03:53.224 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:03:53.224 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:03:53.224 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:03:53.481 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:03:53.481 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:03:53.481 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:03:53.481 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:03:53.481 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:03:53.481 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:03:53.738 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:03:53.738 [75/76] Linking static target lib/libxnvme.a 00:03:53.738 [76/76] Linking target lib/libxnvme.so.0.7.5 00:03:53.738 INFO: autodetecting backend as ninja 00:03:53.738 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:53.996 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:26.127 CC lib/log/log.o 00:04:26.127 CC lib/log/log_flags.o 00:04:26.127 CC lib/log/log_deprecated.o 00:04:26.127 CC lib/ut/ut.o 00:04:26.127 CC lib/ut_mock/mock.o 00:04:26.127 LIB libspdk_ut_mock.a 00:04:26.127 LIB libspdk_ut.a 00:04:26.127 SO libspdk_ut_mock.so.6.0 00:04:26.127 LIB libspdk_log.a 00:04:26.127 SO libspdk_ut.so.2.0 00:04:26.127 SO libspdk_log.so.7.1 00:04:26.127 SYMLINK libspdk_ut.so 00:04:26.127 SYMLINK libspdk_ut_mock.so 00:04:26.127 SYMLINK libspdk_log.so 00:04:26.127 CC lib/dma/dma.o 00:04:26.127 CC lib/util/bit_array.o 00:04:26.127 CC lib/util/base64.o 00:04:26.127 CC lib/util/crc32.o 00:04:26.127 CC lib/util/cpuset.o 00:04:26.127 CC lib/util/crc16.o 00:04:26.127 CC lib/util/crc32c.o 00:04:26.127 CC lib/ioat/ioat.o 00:04:26.127 CXX lib/trace_parser/trace.o 00:04:26.127 CC lib/vfio_user/host/vfio_user_pci.o 00:04:26.127 CC lib/util/crc32_ieee.o 00:04:26.127 CC lib/vfio_user/host/vfio_user.o 00:04:26.127 LIB libspdk_dma.a 00:04:26.127 CC lib/util/crc64.o 00:04:26.127 SO libspdk_dma.so.5.0 00:04:26.127 CC lib/util/dif.o 00:04:26.127 SYMLINK libspdk_dma.so 00:04:26.127 CC lib/util/fd.o 00:04:26.127 CC lib/util/file.o 00:04:26.127 CC lib/util/fd_group.o 00:04:26.127 CC lib/util/hexlify.o 00:04:26.127 CC lib/util/iov.o 00:04:26.127 LIB 
libspdk_ioat.a 00:04:26.127 CC lib/util/math.o 00:04:26.127 SO libspdk_ioat.so.7.0 00:04:26.127 CC lib/util/net.o 00:04:26.127 CC lib/util/pipe.o 00:04:26.127 CC lib/util/strerror_tls.o 00:04:26.127 LIB libspdk_vfio_user.a 00:04:26.127 SYMLINK libspdk_ioat.so 00:04:26.127 CC lib/util/string.o 00:04:26.127 SO libspdk_vfio_user.so.5.0 00:04:26.127 CC lib/util/uuid.o 00:04:26.127 SYMLINK libspdk_vfio_user.so 00:04:26.127 CC lib/util/xor.o 00:04:26.127 CC lib/util/zipf.o 00:04:26.127 CC lib/util/md5.o 00:04:26.127 LIB libspdk_util.a 00:04:26.127 LIB libspdk_trace_parser.a 00:04:26.127 SO libspdk_util.so.10.1 00:04:26.127 SO libspdk_trace_parser.so.6.0 00:04:26.127 SYMLINK libspdk_util.so 00:04:26.127 SYMLINK libspdk_trace_parser.so 00:04:26.127 CC lib/rdma_utils/rdma_utils.o 00:04:26.127 CC lib/conf/conf.o 00:04:26.127 CC lib/idxd/idxd_user.o 00:04:26.127 CC lib/idxd/idxd.o 00:04:26.127 CC lib/env_dpdk/env.o 00:04:26.127 CC lib/env_dpdk/memory.o 00:04:26.127 CC lib/idxd/idxd_kernel.o 00:04:26.127 CC lib/env_dpdk/pci.o 00:04:26.127 CC lib/json/json_parse.o 00:04:26.127 CC lib/vmd/vmd.o 00:04:26.127 CC lib/vmd/led.o 00:04:26.127 CC lib/json/json_util.o 00:04:26.127 LIB libspdk_conf.a 00:04:26.127 SO libspdk_conf.so.6.0 00:04:26.127 CC lib/env_dpdk/init.o 00:04:26.127 LIB libspdk_rdma_utils.a 00:04:26.127 SO libspdk_rdma_utils.so.1.0 00:04:26.127 SYMLINK libspdk_conf.so 00:04:26.127 CC lib/env_dpdk/threads.o 00:04:26.127 CC lib/env_dpdk/pci_ioat.o 00:04:26.127 SYMLINK libspdk_rdma_utils.so 00:04:26.127 CC lib/env_dpdk/pci_virtio.o 00:04:26.127 CC lib/env_dpdk/pci_vmd.o 00:04:26.127 CC lib/json/json_write.o 00:04:26.127 CC lib/env_dpdk/pci_idxd.o 00:04:26.127 CC lib/env_dpdk/pci_event.o 00:04:26.127 CC lib/env_dpdk/sigbus_handler.o 00:04:26.127 CC lib/env_dpdk/pci_dpdk.o 00:04:26.127 CC lib/rdma_provider/common.o 00:04:26.127 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:26.127 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:26.127 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:26.127 LIB libspdk_idxd.a 00:04:26.127 SO libspdk_idxd.so.12.1 00:04:26.127 LIB libspdk_vmd.a 00:04:26.127 LIB libspdk_json.a 00:04:26.127 SO libspdk_vmd.so.6.0 00:04:26.127 SO libspdk_json.so.6.0 00:04:26.127 SYMLINK libspdk_idxd.so 00:04:26.127 SYMLINK libspdk_vmd.so 00:04:26.127 SYMLINK libspdk_json.so 00:04:26.127 LIB libspdk_rdma_provider.a 00:04:26.127 SO libspdk_rdma_provider.so.7.0 00:04:26.127 SYMLINK libspdk_rdma_provider.so 00:04:26.127 CC lib/jsonrpc/jsonrpc_server.o 00:04:26.127 CC lib/jsonrpc/jsonrpc_client.o 00:04:26.127 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:26.127 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:26.385 LIB libspdk_jsonrpc.a 00:04:26.385 SO libspdk_jsonrpc.so.6.0 00:04:26.385 SYMLINK libspdk_jsonrpc.so 00:04:26.385 LIB libspdk_env_dpdk.a 00:04:26.643 SO libspdk_env_dpdk.so.15.1 00:04:26.643 CC lib/rpc/rpc.o 00:04:26.643 SYMLINK libspdk_env_dpdk.so 00:04:26.902 LIB libspdk_rpc.a 00:04:26.902 SO libspdk_rpc.so.6.0 00:04:26.902 SYMLINK libspdk_rpc.so 00:04:27.160 CC lib/keyring/keyring.o 00:04:27.160 CC lib/keyring/keyring_rpc.o 00:04:27.160 CC lib/notify/notify_rpc.o 00:04:27.160 CC lib/notify/notify.o 00:04:27.160 CC lib/trace/trace.o 00:04:27.160 CC lib/trace/trace_flags.o 00:04:27.160 CC lib/trace/trace_rpc.o 00:04:27.160 LIB libspdk_notify.a 00:04:27.418 SO libspdk_notify.so.6.0 00:04:27.418 LIB libspdk_keyring.a 00:04:27.418 SYMLINK libspdk_notify.so 00:04:27.418 SO libspdk_keyring.so.2.0 00:04:27.418 LIB libspdk_trace.a 00:04:27.418 SO libspdk_trace.so.11.0 00:04:27.418 SYMLINK libspdk_keyring.so 
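Note: in the SPDK make output above, CC lines compile objects, LIB lines produce libraries, and each SO/SYMLINK pair reports a versioned shared object plus its unversioned development symlink. A minimal sketch of that pattern using plain gcc and ln, assuming generic source names; SPDK's real rules live in its build machinery, and the soname chosen here is illustrative:

    # Sketch of the versioned-.so pattern behind the SO/SYMLINK lines above.
    gcc -c -fPIC log.c log_flags.c log_deprecated.c
    gcc -shared -Wl,-soname,libspdk_log.so.7 \
        -o libspdk_log.so.7.1 log.o log_flags.o log_deprecated.o
    ln -sf libspdk_log.so.7.1 libspdk_log.so   # what "SYMLINK libspdk_log.so" reports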
00:04:27.418 SYMLINK libspdk_trace.so 00:04:27.676 CC lib/thread/iobuf.o 00:04:27.676 CC lib/thread/thread.o 00:04:27.676 CC lib/sock/sock.o 00:04:27.676 CC lib/sock/sock_rpc.o 00:04:28.242 LIB libspdk_sock.a 00:04:28.242 SO libspdk_sock.so.10.0 00:04:28.242 SYMLINK libspdk_sock.so 00:04:28.500 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:28.500 CC lib/nvme/nvme_fabric.o 00:04:28.500 CC lib/nvme/nvme_ctrlr.o 00:04:28.500 CC lib/nvme/nvme_ns.o 00:04:28.500 CC lib/nvme/nvme_ns_cmd.o 00:04:28.500 CC lib/nvme/nvme_pcie_common.o 00:04:28.500 CC lib/nvme/nvme_pcie.o 00:04:28.500 CC lib/nvme/nvme.o 00:04:28.500 CC lib/nvme/nvme_qpair.o 00:04:29.065 CC lib/nvme/nvme_quirks.o 00:04:29.065 CC lib/nvme/nvme_transport.o 00:04:29.065 CC lib/nvme/nvme_discovery.o 00:04:29.065 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:29.065 LIB libspdk_thread.a 00:04:29.323 SO libspdk_thread.so.11.0 00:04:29.323 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:29.323 CC lib/nvme/nvme_tcp.o 00:04:29.323 CC lib/nvme/nvme_opal.o 00:04:29.323 SYMLINK libspdk_thread.so 00:04:29.323 CC lib/nvme/nvme_io_msg.o 00:04:29.323 CC lib/nvme/nvme_poll_group.o 00:04:29.323 CC lib/nvme/nvme_zns.o 00:04:29.581 CC lib/nvme/nvme_stubs.o 00:04:29.581 CC lib/nvme/nvme_auth.o 00:04:29.581 CC lib/nvme/nvme_cuse.o 00:04:29.839 CC lib/nvme/nvme_rdma.o 00:04:29.839 CC lib/accel/accel.o 00:04:29.839 CC lib/blob/blobstore.o 00:04:29.839 CC lib/accel/accel_rpc.o 00:04:30.097 CC lib/init/json_config.o 00:04:30.097 CC lib/virtio/virtio.o 00:04:30.097 CC lib/virtio/virtio_vhost_user.o 00:04:30.354 CC lib/init/subsystem.o 00:04:30.354 CC lib/init/subsystem_rpc.o 00:04:30.354 CC lib/init/rpc.o 00:04:30.354 CC lib/blob/request.o 00:04:30.354 CC lib/blob/zeroes.o 00:04:30.354 CC lib/virtio/virtio_vfio_user.o 00:04:30.612 CC lib/blob/blob_bs_dev.o 00:04:30.612 LIB libspdk_init.a 00:04:30.612 SO libspdk_init.so.6.0 00:04:30.612 CC lib/virtio/virtio_pci.o 00:04:30.612 SYMLINK libspdk_init.so 00:04:30.612 CC lib/accel/accel_sw.o 00:04:30.612 CC lib/fsdev/fsdev.o 00:04:30.612 CC lib/fsdev/fsdev_io.o 00:04:30.612 CC lib/fsdev/fsdev_rpc.o 00:04:30.870 LIB libspdk_virtio.a 00:04:30.870 CC lib/event/app.o 00:04:30.870 CC lib/event/reactor.o 00:04:30.870 CC lib/event/log_rpc.o 00:04:30.870 SO libspdk_virtio.so.7.0 00:04:30.870 CC lib/event/app_rpc.o 00:04:30.870 SYMLINK libspdk_virtio.so 00:04:30.870 CC lib/event/scheduler_static.o 00:04:30.870 LIB libspdk_accel.a 00:04:31.128 SO libspdk_accel.so.16.0 00:04:31.128 SYMLINK libspdk_accel.so 00:04:31.128 LIB libspdk_nvme.a 00:04:31.128 LIB libspdk_fsdev.a 00:04:31.128 SO libspdk_fsdev.so.2.0 00:04:31.128 SYMLINK libspdk_fsdev.so 00:04:31.128 SO libspdk_nvme.so.15.0 00:04:31.386 CC lib/bdev/bdev_rpc.o 00:04:31.386 CC lib/bdev/bdev.o 00:04:31.386 CC lib/bdev/bdev_zone.o 00:04:31.386 CC lib/bdev/scsi_nvme.o 00:04:31.386 CC lib/bdev/part.o 00:04:31.386 LIB libspdk_event.a 00:04:31.386 SO libspdk_event.so.14.0 00:04:31.386 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:31.386 SYMLINK libspdk_event.so 00:04:31.386 SYMLINK libspdk_nvme.so 00:04:31.990 LIB libspdk_fuse_dispatcher.a 00:04:31.990 SO libspdk_fuse_dispatcher.so.1.0 00:04:32.248 SYMLINK libspdk_fuse_dispatcher.so 00:04:33.183 LIB libspdk_blob.a 00:04:33.441 SO libspdk_blob.so.12.0 00:04:33.441 SYMLINK libspdk_blob.so 00:04:33.699 CC lib/lvol/lvol.o 00:04:33.699 CC lib/blobfs/tree.o 00:04:33.699 CC lib/blobfs/blobfs.o 00:04:33.699 LIB libspdk_bdev.a 00:04:33.699 SO libspdk_bdev.so.17.0 00:04:33.957 SYMLINK libspdk_bdev.so 00:04:33.958 CC lib/scsi/dev.o 00:04:33.958 CC lib/scsi/lun.o 
00:04:33.958 CC lib/scsi/port.o 00:04:33.958 CC lib/scsi/scsi.o 00:04:33.958 CC lib/nvmf/ctrlr.o 00:04:33.958 CC lib/nbd/nbd.o 00:04:33.958 CC lib/ftl/ftl_core.o 00:04:33.958 CC lib/ublk/ublk.o 00:04:34.216 CC lib/ftl/ftl_init.o 00:04:34.216 CC lib/ublk/ublk_rpc.o 00:04:34.216 CC lib/ftl/ftl_layout.o 00:04:34.216 CC lib/scsi/scsi_bdev.o 00:04:34.216 CC lib/scsi/scsi_pr.o 00:04:34.216 CC lib/scsi/scsi_rpc.o 00:04:34.474 LIB libspdk_blobfs.a 00:04:34.474 SO libspdk_blobfs.so.11.0 00:04:34.474 CC lib/scsi/task.o 00:04:34.474 CC lib/nbd/nbd_rpc.o 00:04:34.474 SYMLINK libspdk_blobfs.so 00:04:34.474 CC lib/nvmf/ctrlr_discovery.o 00:04:34.474 CC lib/nvmf/ctrlr_bdev.o 00:04:34.474 CC lib/ftl/ftl_debug.o 00:04:34.474 LIB libspdk_nbd.a 00:04:34.474 CC lib/ftl/ftl_io.o 00:04:34.733 LIB libspdk_lvol.a 00:04:34.733 SO libspdk_nbd.so.7.0 00:04:34.733 CC lib/ftl/ftl_sb.o 00:04:34.733 SO libspdk_lvol.so.11.0 00:04:34.733 LIB libspdk_scsi.a 00:04:34.733 SYMLINK libspdk_nbd.so 00:04:34.733 CC lib/ftl/ftl_l2p.o 00:04:34.733 LIB libspdk_ublk.a 00:04:34.733 SO libspdk_scsi.so.9.0 00:04:34.733 SYMLINK libspdk_lvol.so 00:04:34.733 CC lib/ftl/ftl_l2p_flat.o 00:04:34.733 SO libspdk_ublk.so.3.0 00:04:34.733 SYMLINK libspdk_scsi.so 00:04:34.733 CC lib/nvmf/subsystem.o 00:04:34.733 CC lib/nvmf/nvmf.o 00:04:34.733 SYMLINK libspdk_ublk.so 00:04:34.733 CC lib/ftl/ftl_nv_cache.o 00:04:34.733 CC lib/nvmf/nvmf_rpc.o 00:04:34.733 CC lib/nvmf/transport.o 00:04:34.991 CC lib/nvmf/tcp.o 00:04:34.991 CC lib/ftl/ftl_band.o 00:04:34.991 CC lib/iscsi/conn.o 00:04:35.249 CC lib/iscsi/init_grp.o 00:04:35.249 CC lib/iscsi/iscsi.o 00:04:35.508 CC lib/nvmf/stubs.o 00:04:35.508 CC lib/ftl/ftl_band_ops.o 00:04:35.508 CC lib/ftl/ftl_writer.o 00:04:35.508 CC lib/ftl/ftl_rq.o 00:04:35.767 CC lib/ftl/ftl_reloc.o 00:04:35.767 CC lib/ftl/ftl_l2p_cache.o 00:04:35.767 CC lib/ftl/ftl_p2l.o 00:04:35.767 CC lib/nvmf/mdns_server.o 00:04:35.767 CC lib/ftl/ftl_p2l_log.o 00:04:35.767 CC lib/vhost/vhost.o 00:04:36.025 CC lib/ftl/mngt/ftl_mngt.o 00:04:36.025 CC lib/vhost/vhost_rpc.o 00:04:36.025 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:36.025 CC lib/nvmf/rdma.o 00:04:36.025 CC lib/nvmf/auth.o 00:04:36.283 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:36.283 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:36.283 CC lib/iscsi/param.o 00:04:36.283 CC lib/vhost/vhost_scsi.o 00:04:36.283 CC lib/iscsi/portal_grp.o 00:04:36.283 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:36.541 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:36.541 CC lib/iscsi/tgt_node.o 00:04:36.541 CC lib/iscsi/iscsi_subsystem.o 00:04:36.541 CC lib/iscsi/iscsi_rpc.o 00:04:36.541 CC lib/iscsi/task.o 00:04:36.541 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:36.799 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:36.799 CC lib/vhost/vhost_blk.o 00:04:36.799 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:36.799 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:36.799 CC lib/vhost/rte_vhost_user.o 00:04:36.799 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:36.799 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:36.799 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:37.058 CC lib/ftl/utils/ftl_conf.o 00:04:37.058 CC lib/ftl/utils/ftl_md.o 00:04:37.058 CC lib/ftl/utils/ftl_mempool.o 00:04:37.058 CC lib/ftl/utils/ftl_bitmap.o 00:04:37.058 LIB libspdk_iscsi.a 00:04:37.058 CC lib/ftl/utils/ftl_property.o 00:04:37.058 SO libspdk_iscsi.so.8.0 00:04:37.058 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:37.058 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:37.058 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:37.316 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:37.316 SYMLINK libspdk_iscsi.so 
00:04:37.316 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:37.316 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:37.316 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:37.316 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:37.316 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:37.316 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:37.316 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:37.316 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:37.574 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:37.574 CC lib/ftl/base/ftl_base_dev.o 00:04:37.574 CC lib/ftl/base/ftl_base_bdev.o 00:04:37.574 CC lib/ftl/ftl_trace.o 00:04:37.574 LIB libspdk_vhost.a 00:04:37.832 SO libspdk_vhost.so.8.0 00:04:37.832 LIB libspdk_ftl.a 00:04:37.832 SYMLINK libspdk_vhost.so 00:04:37.832 SO libspdk_ftl.so.9.0 00:04:38.091 LIB libspdk_nvmf.a 00:04:38.091 SYMLINK libspdk_ftl.so 00:04:38.091 SO libspdk_nvmf.so.20.0 00:04:38.349 SYMLINK libspdk_nvmf.so 00:04:38.606 CC module/env_dpdk/env_dpdk_rpc.o 00:04:38.606 CC module/accel/error/accel_error.o 00:04:38.606 CC module/sock/posix/posix.o 00:04:38.606 CC module/accel/ioat/accel_ioat.o 00:04:38.606 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:38.606 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:38.606 CC module/blob/bdev/blob_bdev.o 00:04:38.606 CC module/scheduler/gscheduler/gscheduler.o 00:04:38.606 CC module/keyring/file/keyring.o 00:04:38.606 CC module/fsdev/aio/fsdev_aio.o 00:04:38.865 LIB libspdk_env_dpdk_rpc.a 00:04:38.865 SO libspdk_env_dpdk_rpc.so.6.0 00:04:38.865 LIB libspdk_scheduler_gscheduler.a 00:04:38.865 SYMLINK libspdk_env_dpdk_rpc.so 00:04:38.865 SO libspdk_scheduler_gscheduler.so.4.0 00:04:38.865 LIB libspdk_scheduler_dpdk_governor.a 00:04:38.865 CC module/accel/ioat/accel_ioat_rpc.o 00:04:38.865 CC module/keyring/file/keyring_rpc.o 00:04:38.865 LIB libspdk_scheduler_dynamic.a 00:04:38.865 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:38.865 SYMLINK libspdk_scheduler_gscheduler.so 00:04:38.865 SO libspdk_scheduler_dynamic.so.4.0 00:04:38.865 CC module/accel/error/accel_error_rpc.o 00:04:38.865 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:38.865 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:38.865 SYMLINK libspdk_scheduler_dynamic.so 00:04:38.865 CC module/fsdev/aio/linux_aio_mgr.o 00:04:38.865 LIB libspdk_accel_ioat.a 00:04:39.123 CC module/keyring/linux/keyring.o 00:04:39.123 LIB libspdk_blob_bdev.a 00:04:39.123 SO libspdk_accel_ioat.so.6.0 00:04:39.123 LIB libspdk_keyring_file.a 00:04:39.123 SO libspdk_blob_bdev.so.12.0 00:04:39.123 SO libspdk_keyring_file.so.2.0 00:04:39.123 LIB libspdk_accel_error.a 00:04:39.123 SYMLINK libspdk_accel_ioat.so 00:04:39.123 SYMLINK libspdk_blob_bdev.so 00:04:39.123 CC module/keyring/linux/keyring_rpc.o 00:04:39.123 SYMLINK libspdk_keyring_file.so 00:04:39.123 SO libspdk_accel_error.so.2.0 00:04:39.123 CC module/accel/dsa/accel_dsa.o 00:04:39.123 SYMLINK libspdk_accel_error.so 00:04:39.123 CC module/accel/dsa/accel_dsa_rpc.o 00:04:39.123 LIB libspdk_keyring_linux.a 00:04:39.123 SO libspdk_keyring_linux.so.1.0 00:04:39.123 CC module/accel/iaa/accel_iaa.o 00:04:39.382 CC module/accel/iaa/accel_iaa_rpc.o 00:04:39.382 SYMLINK libspdk_keyring_linux.so 00:04:39.382 CC module/blobfs/bdev/blobfs_bdev.o 00:04:39.382 CC module/bdev/delay/vbdev_delay.o 00:04:39.382 CC module/bdev/gpt/gpt.o 00:04:39.382 CC module/bdev/error/vbdev_error.o 00:04:39.382 LIB libspdk_accel_dsa.a 00:04:39.382 LIB libspdk_accel_iaa.a 00:04:39.382 SO libspdk_accel_dsa.so.5.0 00:04:39.382 SO libspdk_accel_iaa.so.3.0 00:04:39.382 CC module/bdev/lvol/vbdev_lvol.o 00:04:39.382 LIB libspdk_fsdev_aio.a 
00:04:39.382 SYMLINK libspdk_accel_dsa.so 00:04:39.382 CC module/bdev/gpt/vbdev_gpt.o 00:04:39.382 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:39.382 SYMLINK libspdk_accel_iaa.so 00:04:39.382 LIB libspdk_sock_posix.a 00:04:39.382 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:39.382 SO libspdk_fsdev_aio.so.1.0 00:04:39.640 SO libspdk_sock_posix.so.6.0 00:04:39.640 CC module/bdev/malloc/bdev_malloc.o 00:04:39.640 SYMLINK libspdk_fsdev_aio.so 00:04:39.640 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:39.640 CC module/bdev/error/vbdev_error_rpc.o 00:04:39.640 SYMLINK libspdk_sock_posix.so 00:04:39.640 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:39.640 LIB libspdk_blobfs_bdev.a 00:04:39.640 CC module/bdev/null/bdev_null.o 00:04:39.640 SO libspdk_blobfs_bdev.so.6.0 00:04:39.640 SYMLINK libspdk_blobfs_bdev.so 00:04:39.640 LIB libspdk_bdev_delay.a 00:04:39.640 LIB libspdk_bdev_error.a 00:04:39.640 SO libspdk_bdev_delay.so.6.0 00:04:39.640 SO libspdk_bdev_error.so.6.0 00:04:39.640 LIB libspdk_bdev_gpt.a 00:04:39.898 CC module/bdev/nvme/bdev_nvme.o 00:04:39.898 SO libspdk_bdev_gpt.so.6.0 00:04:39.898 SYMLINK libspdk_bdev_error.so 00:04:39.898 SYMLINK libspdk_bdev_delay.so 00:04:39.898 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:39.898 CC module/bdev/null/bdev_null_rpc.o 00:04:39.898 CC module/bdev/passthru/vbdev_passthru.o 00:04:39.898 SYMLINK libspdk_bdev_gpt.so 00:04:39.898 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:39.898 CC module/bdev/raid/bdev_raid.o 00:04:39.898 CC module/bdev/raid/bdev_raid_rpc.o 00:04:39.898 LIB libspdk_bdev_malloc.a 00:04:39.898 SO libspdk_bdev_malloc.so.6.0 00:04:39.898 LIB libspdk_bdev_lvol.a 00:04:39.898 LIB libspdk_bdev_null.a 00:04:39.898 SO libspdk_bdev_lvol.so.6.0 00:04:39.898 SYMLINK libspdk_bdev_malloc.so 00:04:39.898 SO libspdk_bdev_null.so.6.0 00:04:40.156 CC module/bdev/nvme/nvme_rpc.o 00:04:40.156 SYMLINK libspdk_bdev_lvol.so 00:04:40.156 LIB libspdk_bdev_passthru.a 00:04:40.156 CC module/bdev/split/vbdev_split.o 00:04:40.156 SYMLINK libspdk_bdev_null.so 00:04:40.156 CC module/bdev/split/vbdev_split_rpc.o 00:04:40.156 SO libspdk_bdev_passthru.so.6.0 00:04:40.156 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:40.156 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:40.156 SYMLINK libspdk_bdev_passthru.so 00:04:40.156 CC module/bdev/xnvme/bdev_xnvme.o 00:04:40.156 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:40.156 CC module/bdev/raid/bdev_raid_sb.o 00:04:40.413 CC module/bdev/aio/bdev_aio.o 00:04:40.413 LIB libspdk_bdev_split.a 00:04:40.413 SO libspdk_bdev_split.so.6.0 00:04:40.413 CC module/bdev/raid/raid0.o 00:04:40.413 LIB libspdk_bdev_zone_block.a 00:04:40.413 CC module/bdev/ftl/bdev_ftl.o 00:04:40.413 SO libspdk_bdev_zone_block.so.6.0 00:04:40.413 SYMLINK libspdk_bdev_split.so 00:04:40.413 LIB libspdk_bdev_xnvme.a 00:04:40.413 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:40.413 SO libspdk_bdev_xnvme.so.3.0 00:04:40.413 SYMLINK libspdk_bdev_zone_block.so 00:04:40.413 CC module/bdev/raid/raid1.o 00:04:40.413 CC module/bdev/raid/concat.o 00:04:40.413 SYMLINK libspdk_bdev_xnvme.so 00:04:40.413 CC module/bdev/aio/bdev_aio_rpc.o 00:04:40.671 CC module/bdev/nvme/bdev_mdns_client.o 00:04:40.671 CC module/bdev/iscsi/bdev_iscsi.o 00:04:40.671 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:40.671 LIB libspdk_bdev_aio.a 00:04:40.671 SO libspdk_bdev_aio.so.6.0 00:04:40.671 LIB libspdk_bdev_ftl.a 00:04:40.671 CC module/bdev/nvme/vbdev_opal.o 00:04:40.671 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:40.671 SO libspdk_bdev_ftl.so.6.0 00:04:40.671 SYMLINK 
libspdk_bdev_aio.so 00:04:40.671 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:40.671 SYMLINK libspdk_bdev_ftl.so 00:04:40.671 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:40.671 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:40.671 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:40.930 LIB libspdk_bdev_raid.a 00:04:40.930 SO libspdk_bdev_raid.so.6.0 00:04:40.930 LIB libspdk_bdev_iscsi.a 00:04:40.930 SO libspdk_bdev_iscsi.so.6.0 00:04:40.930 SYMLINK libspdk_bdev_raid.so 00:04:41.188 SYMLINK libspdk_bdev_iscsi.so 00:04:41.188 LIB libspdk_bdev_virtio.a 00:04:41.445 SO libspdk_bdev_virtio.so.6.0 00:04:41.445 SYMLINK libspdk_bdev_virtio.so 00:04:42.012 LIB libspdk_bdev_nvme.a 00:04:42.012 SO libspdk_bdev_nvme.so.7.1 00:04:42.269 SYMLINK libspdk_bdev_nvme.so 00:04:42.612 CC module/event/subsystems/fsdev/fsdev.o 00:04:42.612 CC module/event/subsystems/vmd/vmd.o 00:04:42.612 CC module/event/subsystems/iobuf/iobuf.o 00:04:42.612 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:42.612 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:42.612 CC module/event/subsystems/sock/sock.o 00:04:42.612 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:42.612 CC module/event/subsystems/keyring/keyring.o 00:04:42.612 CC module/event/subsystems/scheduler/scheduler.o 00:04:42.612 LIB libspdk_event_iobuf.a 00:04:42.612 LIB libspdk_event_keyring.a 00:04:42.878 LIB libspdk_event_vmd.a 00:04:42.878 LIB libspdk_event_vhost_blk.a 00:04:42.878 LIB libspdk_event_fsdev.a 00:04:42.878 LIB libspdk_event_sock.a 00:04:42.878 SO libspdk_event_keyring.so.1.0 00:04:42.878 SO libspdk_event_iobuf.so.3.0 00:04:42.878 LIB libspdk_event_scheduler.a 00:04:42.878 SO libspdk_event_vmd.so.6.0 00:04:42.878 SO libspdk_event_fsdev.so.1.0 00:04:42.878 SO libspdk_event_sock.so.5.0 00:04:42.878 SO libspdk_event_vhost_blk.so.3.0 00:04:42.878 SO libspdk_event_scheduler.so.4.0 00:04:42.878 SYMLINK libspdk_event_keyring.so 00:04:42.878 SYMLINK libspdk_event_iobuf.so 00:04:42.878 SYMLINK libspdk_event_fsdev.so 00:04:42.878 SYMLINK libspdk_event_vhost_blk.so 00:04:42.878 SYMLINK libspdk_event_vmd.so 00:04:42.878 SYMLINK libspdk_event_scheduler.so 00:04:42.878 SYMLINK libspdk_event_sock.so 00:04:42.878 CC module/event/subsystems/accel/accel.o 00:04:43.136 LIB libspdk_event_accel.a 00:04:43.136 SO libspdk_event_accel.so.6.0 00:04:43.136 SYMLINK libspdk_event_accel.so 00:04:43.396 CC module/event/subsystems/bdev/bdev.o 00:04:43.656 LIB libspdk_event_bdev.a 00:04:43.656 SO libspdk_event_bdev.so.6.0 00:04:43.656 SYMLINK libspdk_event_bdev.so 00:04:43.916 CC module/event/subsystems/scsi/scsi.o 00:04:43.916 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:43.916 CC module/event/subsystems/ublk/ublk.o 00:04:43.916 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:43.916 CC module/event/subsystems/nbd/nbd.o 00:04:43.916 LIB libspdk_event_scsi.a 00:04:43.916 LIB libspdk_event_nbd.a 00:04:43.916 LIB libspdk_event_ublk.a 00:04:43.916 SO libspdk_event_scsi.so.6.0 00:04:43.916 SO libspdk_event_nbd.so.6.0 00:04:43.916 SO libspdk_event_ublk.so.3.0 00:04:43.916 SYMLINK libspdk_event_scsi.so 00:04:43.916 SYMLINK libspdk_event_ublk.so 00:04:43.916 SYMLINK libspdk_event_nbd.so 00:04:43.916 LIB libspdk_event_nvmf.a 00:04:43.916 SO libspdk_event_nvmf.so.6.0 00:04:44.176 SYMLINK libspdk_event_nvmf.so 00:04:44.176 CC module/event/subsystems/iscsi/iscsi.o 00:04:44.176 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:44.176 LIB libspdk_event_iscsi.a 00:04:44.434 SO libspdk_event_iscsi.so.6.0 00:04:44.434 SYMLINK libspdk_event_iscsi.so 00:04:44.434 LIB 
libspdk_event_vhost_scsi.a 00:04:44.434 SO libspdk_event_vhost_scsi.so.3.0 00:04:44.434 SYMLINK libspdk_event_vhost_scsi.so 00:04:44.693 SO libspdk.so.6.0 00:04:44.693 SYMLINK libspdk.so 00:04:44.693 TEST_HEADER include/spdk/accel.h 00:04:44.693 TEST_HEADER include/spdk/accel_module.h 00:04:44.693 TEST_HEADER include/spdk/assert.h 00:04:44.693 CXX app/trace/trace.o 00:04:44.693 CC test/rpc_client/rpc_client_test.o 00:04:44.693 TEST_HEADER include/spdk/barrier.h 00:04:44.693 TEST_HEADER include/spdk/base64.h 00:04:44.693 TEST_HEADER include/spdk/bdev.h 00:04:44.693 TEST_HEADER include/spdk/bdev_module.h 00:04:44.693 TEST_HEADER include/spdk/bdev_zone.h 00:04:44.693 TEST_HEADER include/spdk/bit_array.h 00:04:44.693 TEST_HEADER include/spdk/bit_pool.h 00:04:44.693 TEST_HEADER include/spdk/blob_bdev.h 00:04:44.693 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:44.693 TEST_HEADER include/spdk/blobfs.h 00:04:44.693 TEST_HEADER include/spdk/blob.h 00:04:44.693 TEST_HEADER include/spdk/conf.h 00:04:44.693 TEST_HEADER include/spdk/config.h 00:04:44.693 TEST_HEADER include/spdk/cpuset.h 00:04:44.693 TEST_HEADER include/spdk/crc16.h 00:04:44.693 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:44.693 TEST_HEADER include/spdk/crc32.h 00:04:44.693 TEST_HEADER include/spdk/crc64.h 00:04:44.693 TEST_HEADER include/spdk/dif.h 00:04:44.693 TEST_HEADER include/spdk/dma.h 00:04:44.693 TEST_HEADER include/spdk/endian.h 00:04:44.693 TEST_HEADER include/spdk/env_dpdk.h 00:04:44.693 TEST_HEADER include/spdk/env.h 00:04:44.693 TEST_HEADER include/spdk/event.h 00:04:44.693 TEST_HEADER include/spdk/fd_group.h 00:04:44.693 TEST_HEADER include/spdk/fd.h 00:04:44.693 TEST_HEADER include/spdk/file.h 00:04:44.693 TEST_HEADER include/spdk/fsdev.h 00:04:44.693 TEST_HEADER include/spdk/fsdev_module.h 00:04:44.693 TEST_HEADER include/spdk/ftl.h 00:04:44.693 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:44.693 CC test/thread/poller_perf/poller_perf.o 00:04:44.693 TEST_HEADER include/spdk/gpt_spec.h 00:04:44.693 TEST_HEADER include/spdk/hexlify.h 00:04:44.693 TEST_HEADER include/spdk/histogram_data.h 00:04:44.693 CC examples/util/zipf/zipf.o 00:04:44.693 TEST_HEADER include/spdk/idxd.h 00:04:44.693 TEST_HEADER include/spdk/idxd_spec.h 00:04:44.693 TEST_HEADER include/spdk/init.h 00:04:44.693 CC examples/ioat/perf/perf.o 00:04:44.693 TEST_HEADER include/spdk/ioat.h 00:04:44.693 TEST_HEADER include/spdk/ioat_spec.h 00:04:44.953 TEST_HEADER include/spdk/iscsi_spec.h 00:04:44.953 TEST_HEADER include/spdk/json.h 00:04:44.953 TEST_HEADER include/spdk/jsonrpc.h 00:04:44.953 TEST_HEADER include/spdk/keyring.h 00:04:44.953 TEST_HEADER include/spdk/keyring_module.h 00:04:44.953 TEST_HEADER include/spdk/likely.h 00:04:44.953 TEST_HEADER include/spdk/log.h 00:04:44.953 TEST_HEADER include/spdk/lvol.h 00:04:44.953 TEST_HEADER include/spdk/md5.h 00:04:44.953 TEST_HEADER include/spdk/memory.h 00:04:44.953 TEST_HEADER include/spdk/mmio.h 00:04:44.953 TEST_HEADER include/spdk/nbd.h 00:04:44.953 TEST_HEADER include/spdk/net.h 00:04:44.953 TEST_HEADER include/spdk/notify.h 00:04:44.953 TEST_HEADER include/spdk/nvme.h 00:04:44.953 TEST_HEADER include/spdk/nvme_intel.h 00:04:44.953 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:44.953 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:44.953 TEST_HEADER include/spdk/nvme_spec.h 00:04:44.953 CC test/app/bdev_svc/bdev_svc.o 00:04:44.953 TEST_HEADER include/spdk/nvme_zns.h 00:04:44.953 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:44.953 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:44.953 TEST_HEADER 
include/spdk/nvmf.h 00:04:44.953 CC test/dma/test_dma/test_dma.o 00:04:44.953 TEST_HEADER include/spdk/nvmf_spec.h 00:04:44.953 TEST_HEADER include/spdk/nvmf_transport.h 00:04:44.953 TEST_HEADER include/spdk/opal.h 00:04:44.953 TEST_HEADER include/spdk/opal_spec.h 00:04:44.953 TEST_HEADER include/spdk/pci_ids.h 00:04:44.953 TEST_HEADER include/spdk/pipe.h 00:04:44.953 TEST_HEADER include/spdk/queue.h 00:04:44.953 TEST_HEADER include/spdk/reduce.h 00:04:44.953 TEST_HEADER include/spdk/rpc.h 00:04:44.953 TEST_HEADER include/spdk/scheduler.h 00:04:44.953 TEST_HEADER include/spdk/scsi.h 00:04:44.953 TEST_HEADER include/spdk/scsi_spec.h 00:04:44.953 TEST_HEADER include/spdk/sock.h 00:04:44.953 TEST_HEADER include/spdk/stdinc.h 00:04:44.953 TEST_HEADER include/spdk/string.h 00:04:44.953 CC test/env/mem_callbacks/mem_callbacks.o 00:04:44.953 TEST_HEADER include/spdk/thread.h 00:04:44.953 TEST_HEADER include/spdk/trace.h 00:04:44.953 TEST_HEADER include/spdk/trace_parser.h 00:04:44.953 TEST_HEADER include/spdk/tree.h 00:04:44.953 TEST_HEADER include/spdk/ublk.h 00:04:44.953 TEST_HEADER include/spdk/util.h 00:04:44.953 TEST_HEADER include/spdk/uuid.h 00:04:44.953 TEST_HEADER include/spdk/version.h 00:04:44.953 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:44.953 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:44.953 TEST_HEADER include/spdk/vhost.h 00:04:44.953 TEST_HEADER include/spdk/vmd.h 00:04:44.953 TEST_HEADER include/spdk/xor.h 00:04:44.953 TEST_HEADER include/spdk/zipf.h 00:04:44.953 CXX test/cpp_headers/accel.o 00:04:44.953 LINK rpc_client_test 00:04:44.953 LINK interrupt_tgt 00:04:44.953 LINK zipf 00:04:44.953 LINK poller_perf 00:04:44.953 LINK bdev_svc 00:04:44.953 CXX test/cpp_headers/accel_module.o 00:04:44.953 LINK ioat_perf 00:04:44.953 CXX test/cpp_headers/assert.o 00:04:44.953 CXX test/cpp_headers/barrier.o 00:04:44.953 CXX test/cpp_headers/base64.o 00:04:45.214 LINK spdk_trace 00:04:45.214 CC app/trace_record/trace_record.o 00:04:45.214 CXX test/cpp_headers/bdev.o 00:04:45.214 CXX test/cpp_headers/bdev_module.o 00:04:45.214 CC examples/ioat/verify/verify.o 00:04:45.214 CC test/env/vtophys/vtophys.o 00:04:45.214 LINK mem_callbacks 00:04:45.214 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:45.214 CC app/nvmf_tgt/nvmf_main.o 00:04:45.214 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:45.214 LINK test_dma 00:04:45.471 LINK vtophys 00:04:45.471 CXX test/cpp_headers/bdev_zone.o 00:04:45.471 LINK spdk_trace_record 00:04:45.471 LINK env_dpdk_post_init 00:04:45.471 LINK verify 00:04:45.471 CC test/env/memory/memory_ut.o 00:04:45.471 CC test/env/pci/pci_ut.o 00:04:45.471 LINK nvmf_tgt 00:04:45.471 CXX test/cpp_headers/bit_array.o 00:04:45.471 CC app/iscsi_tgt/iscsi_tgt.o 00:04:45.730 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:45.730 CC app/spdk_tgt/spdk_tgt.o 00:04:45.730 CC test/event/event_perf/event_perf.o 00:04:45.730 CXX test/cpp_headers/bit_pool.o 00:04:45.730 LINK nvme_fuzz 00:04:45.730 CC examples/thread/thread/thread_ex.o 00:04:45.730 LINK iscsi_tgt 00:04:45.730 CC test/nvme/aer/aer.o 00:04:45.730 LINK event_perf 00:04:45.730 CXX test/cpp_headers/blob_bdev.o 00:04:45.730 LINK spdk_tgt 00:04:45.730 LINK pci_ut 00:04:45.990 CXX test/cpp_headers/blobfs_bdev.o 00:04:45.990 LINK thread 00:04:45.990 CC test/event/reactor/reactor.o 00:04:45.990 CC test/accel/dif/dif.o 00:04:45.990 LINK aer 00:04:45.990 CC app/spdk_lspci/spdk_lspci.o 00:04:46.249 LINK reactor 00:04:46.249 CXX test/cpp_headers/blobfs.o 00:04:46.249 CC app/spdk_nvme_perf/perf.o 00:04:46.249 CC 
test/blobfs/mkfs/mkfs.o 00:04:46.249 LINK spdk_lspci 00:04:46.249 CXX test/cpp_headers/blob.o 00:04:46.249 CC test/nvme/reset/reset.o 00:04:46.249 CC test/event/reactor_perf/reactor_perf.o 00:04:46.249 CXX test/cpp_headers/conf.o 00:04:46.249 LINK mkfs 00:04:46.249 LINK memory_ut 00:04:46.249 CC examples/sock/hello_world/hello_sock.o 00:04:46.507 LINK reactor_perf 00:04:46.507 CXX test/cpp_headers/config.o 00:04:46.507 CXX test/cpp_headers/cpuset.o 00:04:46.507 CC app/spdk_nvme_identify/identify.o 00:04:46.507 LINK reset 00:04:46.507 CC test/event/app_repeat/app_repeat.o 00:04:46.507 LINK hello_sock 00:04:46.768 CXX test/cpp_headers/crc16.o 00:04:46.768 CC test/app/histogram_perf/histogram_perf.o 00:04:46.768 CC test/event/scheduler/scheduler.o 00:04:46.768 LINK dif 00:04:46.768 LINK histogram_perf 00:04:46.768 LINK app_repeat 00:04:46.768 CC test/nvme/sgl/sgl.o 00:04:46.768 CXX test/cpp_headers/crc32.o 00:04:46.768 CXX test/cpp_headers/crc64.o 00:04:47.030 CC examples/vmd/lsvmd/lsvmd.o 00:04:47.030 CC examples/vmd/led/led.o 00:04:47.030 LINK scheduler 00:04:47.030 LINK spdk_nvme_perf 00:04:47.030 CC test/app/jsoncat/jsoncat.o 00:04:47.030 CXX test/cpp_headers/dif.o 00:04:47.030 CC test/app/stub/stub.o 00:04:47.030 LINK led 00:04:47.030 LINK sgl 00:04:47.030 LINK lsvmd 00:04:47.030 LINK jsoncat 00:04:47.030 CXX test/cpp_headers/dma.o 00:04:47.289 CXX test/cpp_headers/endian.o 00:04:47.289 LINK iscsi_fuzz 00:04:47.289 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:47.289 LINK stub 00:04:47.289 LINK spdk_nvme_identify 00:04:47.289 CC test/nvme/e2edp/nvme_dp.o 00:04:47.289 CXX test/cpp_headers/env_dpdk.o 00:04:47.289 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:47.289 CXX test/cpp_headers/env.o 00:04:47.289 CC test/lvol/esnap/esnap.o 00:04:47.289 CXX test/cpp_headers/event.o 00:04:47.289 CC examples/idxd/perf/perf.o 00:04:47.289 CXX test/cpp_headers/fd_group.o 00:04:47.289 CXX test/cpp_headers/fd.o 00:04:47.289 CXX test/cpp_headers/file.o 00:04:47.289 CC app/spdk_nvme_discover/discovery_aer.o 00:04:47.548 CC app/spdk_top/spdk_top.o 00:04:47.548 LINK nvme_dp 00:04:47.548 CXX test/cpp_headers/fsdev.o 00:04:47.548 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:47.548 LINK spdk_nvme_discover 00:04:47.548 CC examples/accel/perf/accel_perf.o 00:04:47.548 LINK idxd_perf 00:04:47.548 CXX test/cpp_headers/fsdev_module.o 00:04:47.548 CC examples/blob/hello_world/hello_blob.o 00:04:47.548 CC test/nvme/overhead/overhead.o 00:04:47.806 LINK vhost_fuzz 00:04:47.806 CC test/nvme/err_injection/err_injection.o 00:04:47.806 CXX test/cpp_headers/ftl.o 00:04:47.806 LINK hello_fsdev 00:04:47.806 CC examples/blob/cli/blobcli.o 00:04:47.806 LINK hello_blob 00:04:47.806 LINK err_injection 00:04:48.064 CXX test/cpp_headers/fuse_dispatcher.o 00:04:48.064 CC test/bdev/bdevio/bdevio.o 00:04:48.064 LINK overhead 00:04:48.064 CC test/nvme/startup/startup.o 00:04:48.064 LINK accel_perf 00:04:48.064 CXX test/cpp_headers/gpt_spec.o 00:04:48.064 CC test/nvme/reserve/reserve.o 00:04:48.064 CC examples/nvme/hello_world/hello_world.o 00:04:48.323 CXX test/cpp_headers/hexlify.o 00:04:48.323 CXX test/cpp_headers/histogram_data.o 00:04:48.323 LINK startup 00:04:48.323 CC examples/nvme/reconnect/reconnect.o 00:04:48.323 LINK spdk_top 00:04:48.323 LINK reserve 00:04:48.323 LINK blobcli 00:04:48.323 CXX test/cpp_headers/idxd.o 00:04:48.323 LINK bdevio 00:04:48.323 LINK hello_world 00:04:48.323 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:48.323 CC examples/nvme/arbitration/arbitration.o 00:04:48.581 CC 
test/nvme/simple_copy/simple_copy.o 00:04:48.581 CXX test/cpp_headers/idxd_spec.o 00:04:48.581 CC app/vhost/vhost.o 00:04:48.581 CXX test/cpp_headers/init.o 00:04:48.581 CXX test/cpp_headers/ioat.o 00:04:48.581 LINK reconnect 00:04:48.581 CC examples/bdev/hello_world/hello_bdev.o 00:04:48.581 LINK simple_copy 00:04:48.581 CXX test/cpp_headers/ioat_spec.o 00:04:48.581 LINK arbitration 00:04:48.581 LINK vhost 00:04:48.840 CC examples/nvme/hotplug/hotplug.o 00:04:48.840 CC test/nvme/connect_stress/connect_stress.o 00:04:48.840 CC examples/bdev/bdevperf/bdevperf.o 00:04:48.840 LINK nvme_manage 00:04:48.840 CXX test/cpp_headers/iscsi_spec.o 00:04:48.840 LINK hello_bdev 00:04:48.840 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:48.840 CC examples/nvme/abort/abort.o 00:04:48.840 LINK connect_stress 00:04:48.840 LINK hotplug 00:04:49.099 CXX test/cpp_headers/json.o 00:04:49.099 CC app/spdk_dd/spdk_dd.o 00:04:49.099 LINK cmb_copy 00:04:49.099 CC app/fio/nvme/fio_plugin.o 00:04:49.099 CXX test/cpp_headers/jsonrpc.o 00:04:49.099 CC app/fio/bdev/fio_plugin.o 00:04:49.099 CC test/nvme/boot_partition/boot_partition.o 00:04:49.099 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:49.099 CXX test/cpp_headers/keyring.o 00:04:49.099 CXX test/cpp_headers/keyring_module.o 00:04:49.357 LINK abort 00:04:49.357 LINK pmr_persistence 00:04:49.357 LINK boot_partition 00:04:49.357 CXX test/cpp_headers/likely.o 00:04:49.357 LINK spdk_dd 00:04:49.357 CXX test/cpp_headers/log.o 00:04:49.357 CXX test/cpp_headers/lvol.o 00:04:49.357 CXX test/cpp_headers/md5.o 00:04:49.357 CXX test/cpp_headers/memory.o 00:04:49.357 CC test/nvme/compliance/nvme_compliance.o 00:04:49.357 CXX test/cpp_headers/mmio.o 00:04:49.617 CXX test/cpp_headers/nbd.o 00:04:49.617 CXX test/cpp_headers/net.o 00:04:49.617 CXX test/cpp_headers/notify.o 00:04:49.617 CXX test/cpp_headers/nvme.o 00:04:49.617 CXX test/cpp_headers/nvme_intel.o 00:04:49.617 LINK spdk_bdev 00:04:49.617 CXX test/cpp_headers/nvme_ocssd.o 00:04:49.617 LINK spdk_nvme 00:04:49.617 LINK bdevperf 00:04:49.617 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:49.617 CXX test/cpp_headers/nvme_spec.o 00:04:49.617 CXX test/cpp_headers/nvme_zns.o 00:04:49.617 CXX test/cpp_headers/nvmf_cmd.o 00:04:49.617 LINK nvme_compliance 00:04:49.617 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:49.617 CXX test/cpp_headers/nvmf.o 00:04:49.875 CXX test/cpp_headers/nvmf_spec.o 00:04:49.875 CXX test/cpp_headers/nvmf_transport.o 00:04:49.875 CC test/nvme/fused_ordering/fused_ordering.o 00:04:49.875 CXX test/cpp_headers/opal.o 00:04:49.875 CXX test/cpp_headers/opal_spec.o 00:04:49.875 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:49.875 CXX test/cpp_headers/pci_ids.o 00:04:49.875 CC test/nvme/fdp/fdp.o 00:04:49.875 CC test/nvme/cuse/cuse.o 00:04:49.875 CC examples/nvmf/nvmf/nvmf.o 00:04:49.875 CXX test/cpp_headers/pipe.o 00:04:49.875 CXX test/cpp_headers/queue.o 00:04:50.134 CXX test/cpp_headers/reduce.o 00:04:50.134 LINK fused_ordering 00:04:50.134 CXX test/cpp_headers/rpc.o 00:04:50.134 CXX test/cpp_headers/scheduler.o 00:04:50.134 LINK doorbell_aers 00:04:50.134 CXX test/cpp_headers/scsi.o 00:04:50.134 CXX test/cpp_headers/scsi_spec.o 00:04:50.134 CXX test/cpp_headers/sock.o 00:04:50.134 CXX test/cpp_headers/stdinc.o 00:04:50.134 CXX test/cpp_headers/string.o 00:04:50.134 LINK nvmf 00:04:50.134 CXX test/cpp_headers/thread.o 00:04:50.134 CXX test/cpp_headers/trace.o 00:04:50.134 CXX test/cpp_headers/trace_parser.o 00:04:50.393 LINK fdp 00:04:50.393 CXX test/cpp_headers/tree.o 00:04:50.393 CXX 
test/cpp_headers/ublk.o 00:04:50.393 CXX test/cpp_headers/util.o 00:04:50.393 CXX test/cpp_headers/uuid.o 00:04:50.393 CXX test/cpp_headers/version.o 00:04:50.393 CXX test/cpp_headers/vfio_user_pci.o 00:04:50.393 CXX test/cpp_headers/vfio_user_spec.o 00:04:50.393 CXX test/cpp_headers/vhost.o 00:04:50.393 CXX test/cpp_headers/vmd.o 00:04:50.393 CXX test/cpp_headers/xor.o 00:04:50.394 CXX test/cpp_headers/zipf.o 00:04:51.328 LINK cuse 00:04:52.271 LINK esnap 00:04:52.533 ************************************ 00:04:52.533 END TEST make 00:04:52.533 ************************************ 00:04:52.533 00:04:52.533 real 1m2.329s 00:04:52.533 user 5m14.712s 00:04:52.533 sys 0m54.085s 00:04:52.533 22:49:31 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:52.533 22:49:31 make -- common/autotest_common.sh@10 -- $ set +x 00:04:52.533 22:49:31 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:52.533 22:49:31 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:52.533 22:49:31 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:52.533 22:49:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.533 22:49:31 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:52.533 22:49:31 -- pm/common@44 -- $ pid=5808 00:04:52.533 22:49:31 -- pm/common@50 -- $ kill -TERM 5808 00:04:52.533 22:49:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.533 22:49:31 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:52.533 22:49:31 -- pm/common@44 -- $ pid=5809 00:04:52.533 22:49:31 -- pm/common@50 -- $ kill -TERM 5809 00:04:52.533 22:49:31 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:52.533 22:49:31 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:52.796 22:49:31 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:52.796 22:49:31 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:52.796 22:49:31 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:52.796 22:49:31 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:52.796 22:49:31 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.796 22:49:31 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.796 22:49:31 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.796 22:49:31 -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.796 22:49:31 -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.796 22:49:31 -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.796 22:49:31 -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.796 22:49:31 -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.796 22:49:31 -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.796 22:49:31 -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.796 22:49:31 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.796 22:49:31 -- scripts/common.sh@344 -- # case "$op" in 00:04:52.796 22:49:31 -- scripts/common.sh@345 -- # : 1 00:04:52.796 22:49:31 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.796 22:49:31 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.796 22:49:31 -- scripts/common.sh@365 -- # decimal 1 00:04:52.796 22:49:31 -- scripts/common.sh@353 -- # local d=1 00:04:52.796 22:49:31 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.796 22:49:31 -- scripts/common.sh@355 -- # echo 1 00:04:52.796 22:49:31 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.796 22:49:31 -- scripts/common.sh@366 -- # decimal 2 00:04:52.796 22:49:31 -- scripts/common.sh@353 -- # local d=2 00:04:52.796 22:49:31 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.796 22:49:31 -- scripts/common.sh@355 -- # echo 2 00:04:52.796 22:49:31 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.796 22:49:31 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.796 22:49:31 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.796 22:49:31 -- scripts/common.sh@368 -- # return 0 00:04:52.796 22:49:31 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.796 22:49:31 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:52.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.796 --rc genhtml_branch_coverage=1 00:04:52.796 --rc genhtml_function_coverage=1 00:04:52.796 --rc genhtml_legend=1 00:04:52.796 --rc geninfo_all_blocks=1 00:04:52.796 --rc geninfo_unexecuted_blocks=1 00:04:52.796 00:04:52.796 ' 00:04:52.796 22:49:31 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:52.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.796 --rc genhtml_branch_coverage=1 00:04:52.796 --rc genhtml_function_coverage=1 00:04:52.796 --rc genhtml_legend=1 00:04:52.796 --rc geninfo_all_blocks=1 00:04:52.796 --rc geninfo_unexecuted_blocks=1 00:04:52.796 00:04:52.796 ' 00:04:52.796 22:49:31 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:52.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.796 --rc genhtml_branch_coverage=1 00:04:52.796 --rc genhtml_function_coverage=1 00:04:52.796 --rc genhtml_legend=1 00:04:52.796 --rc geninfo_all_blocks=1 00:04:52.796 --rc geninfo_unexecuted_blocks=1 00:04:52.796 00:04:52.796 ' 00:04:52.796 22:49:31 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:52.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.796 --rc genhtml_branch_coverage=1 00:04:52.796 --rc genhtml_function_coverage=1 00:04:52.796 --rc genhtml_legend=1 00:04:52.796 --rc geninfo_all_blocks=1 00:04:52.796 --rc geninfo_unexecuted_blocks=1 00:04:52.796 00:04:52.796 ' 00:04:52.796 22:49:31 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:52.796 22:49:31 -- nvmf/common.sh@7 -- # uname -s 00:04:52.796 22:49:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:52.796 22:49:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:52.796 22:49:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:52.796 22:49:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:52.796 22:49:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:52.797 22:49:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:52.797 22:49:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:52.797 22:49:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:52.797 22:49:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:52.797 22:49:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:52.797 22:49:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e86f0635-77ac-4fdf-8e71-de7b7fded113 00:04:52.797 
22:49:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=e86f0635-77ac-4fdf-8e71-de7b7fded113 00:04:52.797 22:49:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:52.797 22:49:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:52.797 22:49:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:52.797 22:49:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:52.797 22:49:31 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:52.797 22:49:31 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:52.797 22:49:31 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:52.797 22:49:31 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:52.797 22:49:31 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:52.797 22:49:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.797 22:49:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.797 22:49:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.797 22:49:31 -- paths/export.sh@5 -- # export PATH 00:04:52.797 22:49:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.797 22:49:31 -- nvmf/common.sh@51 -- # : 0 00:04:52.797 22:49:31 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:52.797 22:49:31 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:52.797 22:49:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:52.797 22:49:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:52.797 22:49:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:52.797 22:49:31 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:52.797 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:52.797 22:49:31 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:52.797 22:49:31 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:52.797 22:49:31 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:52.797 22:49:31 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:52.797 22:49:31 -- spdk/autotest.sh@32 -- # uname -s 00:04:52.797 22:49:31 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:52.797 22:49:31 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:52.797 22:49:31 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:52.797 22:49:31 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:52.797 22:49:31 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:52.797 22:49:31 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:52.797 22:49:31 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:52.797 22:49:31 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:52.797 22:49:31 -- spdk/autotest.sh@48 -- # udevadm_pid=68055 00:04:52.797 22:49:31 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:52.797 22:49:31 -- pm/common@17 -- # local monitor 00:04:52.797 22:49:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.797 22:49:31 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:52.797 22:49:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.797 22:49:31 -- pm/common@25 -- # sleep 1 00:04:52.797 22:49:31 -- pm/common@21 -- # date +%s 00:04:52.797 22:49:31 -- pm/common@21 -- # date +%s 00:04:52.797 22:49:31 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732661371 00:04:52.797 22:49:31 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732661371 00:04:52.797 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732661371_collect-cpu-load.pm.log 00:04:52.797 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732661371_collect-vmstat.pm.log 00:04:53.740 22:49:32 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:53.740 22:49:32 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:53.740 22:49:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:53.740 22:49:32 -- common/autotest_common.sh@10 -- # set +x 00:04:53.740 22:49:32 -- spdk/autotest.sh@59 -- # create_test_list 00:04:53.740 22:49:32 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:53.740 22:49:32 -- common/autotest_common.sh@10 -- # set +x 00:04:54.002 22:49:32 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:54.002 22:49:32 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:54.002 22:49:32 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:54.002 22:49:32 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:54.002 22:49:32 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:54.002 22:49:32 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:54.002 22:49:32 -- common/autotest_common.sh@1457 -- # uname 00:04:54.002 22:49:32 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:54.002 22:49:32 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:54.002 22:49:32 -- common/autotest_common.sh@1477 -- # uname 00:04:54.002 22:49:32 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:54.002 22:49:32 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:54.002 22:49:32 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:54.002 lcov: LCOV version 1.15 00:04:54.002 22:49:32 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:08.986 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:08.986 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:23.900 22:50:02 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:23.900 22:50:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:23.900 22:50:02 -- common/autotest_common.sh@10 -- # set +x 00:05:23.900 22:50:02 -- spdk/autotest.sh@78 -- # rm -f 00:05:23.900 22:50:02 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:24.163 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:24.736 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:24.736 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:24.736 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:24.736 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:24.736 22:50:03 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:24.736 22:50:03 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:24.736 22:50:03 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:24.736 22:50:03 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:24.736 22:50:03 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:24.736 22:50:03 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:24.736 22:50:03 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:24.736 22:50:03 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:24.736 22:50:03 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:24.736 22:50:03 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:24.736 22:50:03 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:24.736 22:50:03 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:24.736 22:50:03 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:24.736 22:50:03 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:24.736 22:50:03 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:24.736 22:50:03 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:05:24.736 22:50:03 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:05:24.736 22:50:03 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:24.736 22:50:03 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:24.736 22:50:03 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:24.736 22:50:03 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:05:24.736 22:50:03 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:05:24.736 22:50:03 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:24.736 22:50:03 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:24.736 22:50:03 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:24.736 22:50:03 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2c2n1 00:05:24.736 22:50:03 -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1 00:05:24.736 22:50:03 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:05:24.736 
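Note: the get_zoned_devs trace above reduces to reading each block device's queue/zoned attribute in sysfs and treating anything other than "none" as zoned. A standalone sketch of the same probe (the nvme* glob mirrors the devices seen in this log):

    # Sketch of the zoned-device probe performed by the xtrace lines above.
    for sysdev in /sys/block/nvme*; do
        zoned=$(cat "$sysdev/queue/zoned" 2>/dev/null)
        # "none" marks a conventional device; any other value (for example
        # host-managed) would be collected into the zoned_devs map instead.
        [[ $zoned != none ]] && echo "$(basename "$sysdev") is zoned: $zoned"
    done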
22:50:03 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:24.736 22:50:03 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:24.736 22:50:03 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:05:24.736 22:50:03 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:05:24.736 22:50:03 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:24.736 22:50:03 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:24.736 22:50:03 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:24.736 22:50:03 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:05:24.736 22:50:03 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:05:24.736 22:50:03 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:24.736 22:50:03 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:24.736 22:50:03 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:24.737 22:50:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:24.737 22:50:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:24.737 22:50:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:24.737 22:50:03 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:24.737 22:50:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:24.737 No valid GPT data, bailing 00:05:24.737 22:50:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:24.737 22:50:03 -- scripts/common.sh@394 -- # pt= 00:05:24.737 22:50:03 -- scripts/common.sh@395 -- # return 1 00:05:24.737 22:50:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:24.737 1+0 records in 00:05:24.737 1+0 records out 00:05:24.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298941 s, 35.1 MB/s 00:05:24.737 22:50:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:24.737 22:50:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:24.737 22:50:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:24.737 22:50:03 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:24.737 22:50:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:24.737 No valid GPT data, bailing 00:05:24.737 22:50:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:24.999 22:50:03 -- scripts/common.sh@394 -- # pt= 00:05:24.999 22:50:03 -- scripts/common.sh@395 -- # return 1 00:05:24.999 22:50:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:24.999 1+0 records in 00:05:24.999 1+0 records out 00:05:24.999 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00494133 s, 212 MB/s 00:05:24.999 22:50:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:24.999 22:50:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:24.999 22:50:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:24.999 22:50:03 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:24.999 22:50:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:24.999 No valid GPT data, bailing 00:05:24.999 22:50:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:24.999 22:50:03 -- scripts/common.sh@394 -- # pt= 00:05:24.999 22:50:03 -- scripts/common.sh@395 -- # return 1 00:05:24.999 22:50:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:24.999 1+0 
records in 00:05:24.999 1+0 records out 00:05:24.999 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00613012 s, 171 MB/s 00:05:24.999 22:50:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:24.999 22:50:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:24.999 22:50:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:24.999 22:50:03 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:24.999 22:50:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:24.999 No valid GPT data, bailing 00:05:24.999 22:50:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:24.999 22:50:04 -- scripts/common.sh@394 -- # pt= 00:05:24.999 22:50:04 -- scripts/common.sh@395 -- # return 1 00:05:24.999 22:50:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:24.999 1+0 records in 00:05:24.999 1+0 records out 00:05:24.999 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00694067 s, 151 MB/s 00:05:24.999 22:50:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:24.999 22:50:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:24.999 22:50:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:24.999 22:50:04 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:24.999 22:50:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:24.999 No valid GPT data, bailing 00:05:24.999 22:50:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:25.260 22:50:04 -- scripts/common.sh@394 -- # pt= 00:05:25.260 22:50:04 -- scripts/common.sh@395 -- # return 1 00:05:25.260 22:50:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:25.260 1+0 records in 00:05:25.260 1+0 records out 00:05:25.260 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00578616 s, 181 MB/s 00:05:25.260 22:50:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:25.260 22:50:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:25.260 22:50:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:25.260 22:50:04 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:25.260 22:50:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:25.260 No valid GPT data, bailing 00:05:25.260 22:50:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:25.260 22:50:04 -- scripts/common.sh@394 -- # pt= 00:05:25.260 22:50:04 -- scripts/common.sh@395 -- # return 1 00:05:25.260 22:50:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:25.260 1+0 records in 00:05:25.260 1+0 records out 00:05:25.260 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00496684 s, 211 MB/s 00:05:25.260 22:50:04 -- spdk/autotest.sh@105 -- # sync 00:05:25.260 22:50:04 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:25.260 22:50:04 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:25.260 22:50:04 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:27.175 22:50:06 -- spdk/autotest.sh@111 -- # uname -s 00:05:27.175 22:50:06 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:27.176 22:50:06 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:27.176 22:50:06 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:27.748 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:28.322 
Hugepages
00:05:28.322 node     hugesize     free /  total
00:05:28.322 node0   1048576kB        0 /      0
00:05:28.322 node0      2048kB        0 /      0
00:05:28.322
00:05:28.322 Type     BDF             Vendor Device NUMA    Driver       Device    Block devices
00:05:28.322 virtio   0000:00:03.0    1af4   1001   unknown virtio-pci   -         vda
00:05:28.322 NVMe     0000:00:10.0    1b36   0010   unknown nvme         nvme0     nvme0n1
00:05:28.322 NVMe     0000:00:11.0    1b36   0010   unknown nvme         nvme3     nvme3n1
00:05:28.584 NVMe     0000:00:12.0    1b36   0010   unknown nvme         nvme1     nvme1n1 nvme1n2 nvme1n3
00:05:28.584 NVMe     0000:00:13.0    1b36   0010   unknown nvme         nvme2     nvme2n1
00:05:28.584 22:50:07 -- spdk/autotest.sh@117 -- # uname -s 00:05:28.584 22:50:07 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:28.584 22:50:07 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:28.584 22:50:07 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:29.157 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:29.730 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:29.730 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:29.730 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:29.730 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:29.730 22:50:08 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:31.112 22:50:09 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:31.112 22:50:09 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:31.112 22:50:09 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:31.112 22:50:09 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:31.112 22:50:09 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:31.112 22:50:09 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:31.112 22:50:09 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:31.112 22:50:09 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:31.112 22:50:09 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:31.112 22:50:09 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:31.112 22:50:09 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:31.112 22:50:09 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:31.112 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:31.373 Waiting for block devices as requested 00:05:31.373 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:31.635 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:31.635 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:31.635 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:36.959 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:36.959 22:50:15 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:36.959 22:50:15 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:36.959 22:50:15 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:36.959 22:50:15 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:36.959 22:50:15 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:36.959 22:50:15 -- common/autotest_common.sh@1488 -- #
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:36.959 22:50:15 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:36.959 22:50:15 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:36.959 22:50:15 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:36.959 22:50:15 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:36.959 22:50:15 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:36.959 22:50:15 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:36.959 22:50:15 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:36.959 22:50:15 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:36.959 22:50:15 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:36.959 22:50:15 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:36.959 22:50:15 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:36.959 22:50:15 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:36.959 22:50:15 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:36.959 22:50:15 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:36.959 22:50:15 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:36.959 22:50:15 -- common/autotest_common.sh@1543 -- # continue 00:05:36.959 22:50:15 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:36.959 22:50:15 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:36.959 22:50:15 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:36.959 22:50:15 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:36.959 22:50:15 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:36.959 22:50:15 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:36.960 22:50:15 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:36.960 22:50:15 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:36.960 22:50:15 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:36.960 22:50:15 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:36.960 22:50:15 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:36.960 22:50:15 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:36.960 22:50:15 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:36.960 22:50:15 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:36.960 22:50:15 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:36.960 22:50:15 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:36.960 22:50:15 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:36.960 22:50:15 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:36.960 22:50:15 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:36.960 22:50:15 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:36.960 22:50:15 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:36.960 22:50:15 -- common/autotest_common.sh@1543 -- # continue 00:05:36.960 22:50:15 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:36.960 22:50:15 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:36.960 22:50:15 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:36.960 22:50:15 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:05:36.960 22:50:15 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:36.960 22:50:15 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:36.960 22:50:15 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:36.960 22:50:15 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:36.960 22:50:15 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:05:36.960 22:50:15 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:05:36.960 22:50:15 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:36.960 22:50:15 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:05:36.960 22:50:15 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:36.960 22:50:15 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:36.960 22:50:15 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:36.960 22:50:15 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:36.960 22:50:15 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:36.960 22:50:15 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:36.960 22:50:15 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:36.960 22:50:15 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:36.960 22:50:15 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:36.960 22:50:15 -- common/autotest_common.sh@1543 -- # continue 00:05:36.960 22:50:15 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:36.960 22:50:15 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:36.960 22:50:15 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:05:36.960 22:50:15 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:36.960 22:50:15 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:36.960 22:50:15 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:36.960 22:50:15 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:36.960 22:50:15 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:36.960 22:50:15 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:05:36.960 22:50:15 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:05:36.960 22:50:15 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:05:36.960 22:50:15 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:36.960 22:50:15 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:36.960 22:50:15 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:36.960 22:50:15 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:36.960 22:50:15 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:36.960 22:50:15 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:36.960 22:50:15 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:36.960 22:50:15 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:36.960 22:50:15 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:36.960 22:50:15 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
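The id-ctrl probing above condenses to a short gate: read OACS, test the Namespace Management bit, and skip any controller whose unallocated capacity (unvmcap) is already zero, which is what the continue just below does. A minimal sketch of that gate, assuming nvme-cli is installed; the 0x8 mask is inferred from oacs=' 0x12a' reducing to oacs_ns_manage=8 in the trace:

  # Per-controller namespace-revert gate, mirroring the trace above
  for ctrlr in /dev/nvme[0-9]; do
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
    (( oacs & 0x8 )) || continue     # no Namespace Management support
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    (( unvmcap == 0 )) && continue   # nothing unallocated, skip the revert
    echo "$ctrlr needs a namespace revert"
  done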
00:05:36.960 22:50:15 -- common/autotest_common.sh@1543 -- # continue 00:05:36.960 22:50:15 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:36.960 22:50:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:36.960 22:50:15 -- common/autotest_common.sh@10 -- # set +x 00:05:36.960 22:50:16 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:36.960 22:50:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:36.960 22:50:16 -- common/autotest_common.sh@10 -- # set +x 00:05:36.960 22:50:16 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:37.531 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:38.105 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:38.105 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:38.105 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:38.105 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:38.366 22:50:17 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:38.366 22:50:17 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:38.366 22:50:17 -- common/autotest_common.sh@10 -- # set +x 00:05:38.366 22:50:17 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:38.366 22:50:17 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:38.366 22:50:17 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:38.366 22:50:17 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:38.366 22:50:17 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:38.366 22:50:17 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:38.366 22:50:17 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:38.366 22:50:17 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:38.366 22:50:17 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:38.366 22:50:17 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:38.366 22:50:17 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:38.366 22:50:17 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:38.366 22:50:17 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:38.366 22:50:17 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:38.366 22:50:17 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:38.367 22:50:17 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:38.367 22:50:17 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:38.367 22:50:17 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:38.367 22:50:17 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:38.367 22:50:17 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:38.367 22:50:17 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:38.367 22:50:17 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:38.367 22:50:17 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:38.367 22:50:17 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:38.367 22:50:17 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:38.367 22:50:17 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:38.367 22:50:17 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
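The opal_revert_cleanup pass above only acts on controllers whose PCI device ID matches 0x0a54 (the \0\x\0\a\5\4 pattern in the comparison); every QEMU controller here reports 0x0010, so nothing matches, and the fourth controller, 0000:00:13.0, gets the same check just below. The sysfs probe boils down to roughly this sketch:

  # Collect BDFs whose PCI device ID matches 0x0a54; none do in this run
  bdfs=()
  for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")   # 0x0010 on QEMU NVMe
    [[ $device == 0x0a54 ]] && bdfs+=("$bdf")
  done
  (( ${#bdfs[@]} > 0 )) || echo "no 0x0a54 controllers, nothing to revert"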
00:05:38.367 22:50:17 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:38.367 22:50:17 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:38.367 22:50:17 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:38.367 22:50:17 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:38.367 22:50:17 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:38.367 22:50:17 -- common/autotest_common.sh@1572 -- # return 0 00:05:38.367 22:50:17 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:38.367 22:50:17 -- common/autotest_common.sh@1580 -- # return 0 00:05:38.367 22:50:17 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:38.367 22:50:17 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:38.367 22:50:17 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:38.367 22:50:17 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:38.367 22:50:17 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:38.367 22:50:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:38.367 22:50:17 -- common/autotest_common.sh@10 -- # set +x 00:05:38.367 22:50:17 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:38.367 22:50:17 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:38.367 22:50:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.367 22:50:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.367 22:50:17 -- common/autotest_common.sh@10 -- # set +x 00:05:38.367 ************************************ 00:05:38.367 START TEST env 00:05:38.367 ************************************ 00:05:38.367 22:50:17 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:38.628 * Looking for test storage... 00:05:38.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:38.628 22:50:17 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:38.628 22:50:17 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:38.628 22:50:17 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:38.628 22:50:17 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:38.628 22:50:17 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.628 22:50:17 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.628 22:50:17 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.628 22:50:17 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.628 22:50:17 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.628 22:50:17 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.628 22:50:17 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.628 22:50:17 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.628 22:50:17 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.628 22:50:17 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.628 22:50:17 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.628 22:50:17 env -- scripts/common.sh@344 -- # case "$op" in 00:05:38.628 22:50:17 env -- scripts/common.sh@345 -- # : 1 00:05:38.628 22:50:17 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.628 22:50:17 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:38.628 22:50:17 env -- scripts/common.sh@365 -- # decimal 1 00:05:38.628 22:50:17 env -- scripts/common.sh@353 -- # local d=1 00:05:38.628 22:50:17 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.628 22:50:17 env -- scripts/common.sh@355 -- # echo 1 00:05:38.628 22:50:17 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.628 22:50:17 env -- scripts/common.sh@366 -- # decimal 2 00:05:38.628 22:50:17 env -- scripts/common.sh@353 -- # local d=2 00:05:38.628 22:50:17 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.628 22:50:17 env -- scripts/common.sh@355 -- # echo 2 00:05:38.628 22:50:17 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.628 22:50:17 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.628 22:50:17 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.628 22:50:17 env -- scripts/common.sh@368 -- # return 0 00:05:38.628 22:50:17 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.628 22:50:17 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:38.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.628 --rc genhtml_branch_coverage=1 00:05:38.628 --rc genhtml_function_coverage=1 00:05:38.628 --rc genhtml_legend=1 00:05:38.628 --rc geninfo_all_blocks=1 00:05:38.628 --rc geninfo_unexecuted_blocks=1 00:05:38.628 00:05:38.628 ' 00:05:38.628 22:50:17 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:38.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.628 --rc genhtml_branch_coverage=1 00:05:38.628 --rc genhtml_function_coverage=1 00:05:38.628 --rc genhtml_legend=1 00:05:38.628 --rc geninfo_all_blocks=1 00:05:38.628 --rc geninfo_unexecuted_blocks=1 00:05:38.628 00:05:38.628 ' 00:05:38.628 22:50:17 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:38.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.628 --rc genhtml_branch_coverage=1 00:05:38.628 --rc genhtml_function_coverage=1 00:05:38.628 --rc genhtml_legend=1 00:05:38.628 --rc geninfo_all_blocks=1 00:05:38.628 --rc geninfo_unexecuted_blocks=1 00:05:38.628 00:05:38.628 ' 00:05:38.628 22:50:17 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:38.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.628 --rc genhtml_branch_coverage=1 00:05:38.628 --rc genhtml_function_coverage=1 00:05:38.628 --rc genhtml_legend=1 00:05:38.628 --rc geninfo_all_blocks=1 00:05:38.628 --rc geninfo_unexecuted_blocks=1 00:05:38.628 00:05:38.628 ' 00:05:38.628 22:50:17 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:38.628 22:50:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.628 22:50:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.628 22:50:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.628 ************************************ 00:05:38.628 START TEST env_memory 00:05:38.628 ************************************ 00:05:38.628 22:50:17 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:38.628 00:05:38.628 00:05:38.628 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.628 http://cunit.sourceforge.net/ 00:05:38.628 00:05:38.628 00:05:38.628 Suite: memory 00:05:38.628 Test: alloc and free memory map ...[2024-11-26 22:50:17.664586] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:38.628 passed 00:05:38.628 Test: mem map translation ...[2024-11-26 22:50:17.703428] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:38.628 [2024-11-26 22:50:17.703481] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:38.628 [2024-11-26 22:50:17.703542] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:38.629 [2024-11-26 22:50:17.703557] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:38.889 passed 00:05:38.889 Test: mem map registration ...[2024-11-26 22:50:17.771656] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:38.889 [2024-11-26 22:50:17.771705] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:38.889 passed 00:05:38.889 Test: mem map adjacent registrations ...passed 00:05:38.889 00:05:38.889 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.889 suites 1 1 n/a 0 0 00:05:38.889 tests 4 4 4 0 0 00:05:38.889 asserts 152 152 152 0 n/a 00:05:38.889 00:05:38.890 Elapsed time = 0.233 seconds 00:05:38.890 ************************************ 00:05:38.890 END TEST env_memory 00:05:38.890 ************************************ 00:05:38.890 00:05:38.890 real 0m0.273s 00:05:38.890 user 0m0.246s 00:05:38.890 sys 0m0.018s 00:05:38.890 22:50:17 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.890 22:50:17 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:38.890 22:50:17 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:38.890 22:50:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.890 22:50:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.890 22:50:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.890 ************************************ 00:05:38.890 START TEST env_vtophys 00:05:38.890 ************************************ 00:05:38.890 22:50:17 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:38.890 EAL: lib.eal log level changed from notice to debug 00:05:38.890 EAL: Detected lcore 0 as core 0 on socket 0 00:05:38.890 EAL: Detected lcore 1 as core 0 on socket 0 00:05:38.890 EAL: Detected lcore 2 as core 0 on socket 0 00:05:38.890 EAL: Detected lcore 3 as core 0 on socket 0 00:05:38.890 EAL: Detected lcore 4 as core 0 on socket 0 00:05:38.890 EAL: Detected lcore 5 as core 0 on socket 0 00:05:38.890 EAL: Detected lcore 6 as core 0 on socket 0 00:05:38.890 EAL: Detected lcore 7 as core 0 on socket 0 00:05:38.890 EAL: Detected lcore 8 as core 0 on socket 0 00:05:38.890 EAL: Detected lcore 9 as core 0 on socket 0 00:05:38.890 EAL: Maximum logical cores by configuration: 128 00:05:38.890 EAL: Detected CPU lcores: 10 00:05:38.890 EAL: Detected NUMA nodes: 1 00:05:38.890 EAL: Checking presence of .so 'librte_eal.so.25.0' 00:05:38.890 EAL: Detected shared linkage of DPDK 00:05:38.890 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25.0 00:05:38.890 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25.0 00:05:38.890 EAL: Registered [vdev] bus. 00:05:38.890 EAL: bus.vdev log level changed from disabled to notice 00:05:38.890 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25.0 00:05:38.890 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25.0 00:05:38.890 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:38.890 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:38.890 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so.25.0 00:05:38.890 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so.25.0 00:05:38.890 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so.25.0 00:05:38.890 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so.25.0 00:05:38.890 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so.25.0 00:05:38.890 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so.25.0 00:05:38.890 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:05:38.890 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:05:38.890 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:05:38.890 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:05:38.890 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so 00:05:38.890 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so 00:05:38.890 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so 00:05:38.890 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so 00:05:38.890 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so 00:05:38.890 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so 00:05:38.890 EAL: No shared files mode enabled, IPC will be disabled 00:05:38.890 EAL: No shared files mode enabled, IPC is disabled 00:05:38.890 EAL: Selected IOVA mode 'PA' 00:05:38.890 EAL: Probing VFIO support... 00:05:38.890 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:38.890 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:38.890 EAL: Ask a virtual area of 0x2e000 bytes 00:05:38.890 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:38.890 EAL: Setting up physically contiguous memory... 
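EAL looked for /sys/module/vfio above, found nothing, and skipped VFIO, which is why the IOVA mode settles on 'PA' here, consistent with setup.sh having bound the controllers to uio_pci_generic earlier. The same signal can be read from plain shell (a sketch of the check, not EAL's actual code path):

  # Mirror EAL's probe: no vfio module directory means no VFIO
  if [[ -e /sys/module/vfio ]]; then
    echo "vfio loaded; VFIO and IOVA mode VA would be possible"
  else
    echo "vfio missing; expect uio_pci_generic and IOVA mode PA"
  fi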
00:05:38.890 EAL: Setting maximum number of open files to 524288 00:05:38.890 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:38.890 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:38.890 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.890 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:38.890 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:38.890 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.890 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:38.890 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:38.890 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.890 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:38.890 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:38.890 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.890 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:38.890 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:38.890 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.890 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:38.890 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:38.890 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.890 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:38.890 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:38.890 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.890 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:39.151 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:39.151 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.151 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:39.151 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:39.151 EAL: Hugepages will be freed exactly as allocated. 00:05:39.151 EAL: No shared files mode enabled, IPC is disabled 00:05:39.151 EAL: No shared files mode enabled, IPC is disabled 00:05:39.151 EAL: TSC frequency is ~2600000 KHz 00:05:39.151 EAL: Main lcore 0 is ready (tid=7fa0bcfb7a40;cpuset=[0]) 00:05:39.151 EAL: Trying to obtain current memory policy. 00:05:39.151 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.151 EAL: Restoring previous memory policy: 0 00:05:39.151 EAL: request: mp_malloc_sync 00:05:39.151 EAL: No shared files mode enabled, IPC is disabled 00:05:39.151 EAL: Heap on socket 0 was expanded by 2MB 00:05:39.151 EAL: Allocated 2112 bytes of per-lcore data with a 64-byte alignment 00:05:39.151 EAL: No shared files mode enabled, IPC is disabled 00:05:39.151 EAL: Mem event callback 'spdk:(nil)' registered 00:05:39.151 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:39.151 00:05:39.151 00:05:39.151 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.151 http://cunit.sourceforge.net/ 00:05:39.151 00:05:39.151 00:05:39.151 Suite: components_suite 00:05:39.412 Test: vtophys_malloc_test ...passed 00:05:39.412 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
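The four memseg reservations above are easy to sanity-check: each list covers 8192 segments of 2 MiB hugepages, i.e. 0x400000000 bytes (16 GiB) of virtual space, so 64 GiB of VA is reserved in total. In bash arithmetic:

  # One memseg list: 8192 hugepage segments x 2 MiB each
  printf '0x%x\n' $(( 8192 * 2 * 1024 * 1024 ))    # 0x400000000 (16 GiB)
  echo "$(( 4 * 16 ))GiB reserved across the 4 lists"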
00:05:39.412 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.412 EAL: Restoring previous memory policy: 4 00:05:39.412 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.412 EAL: request: mp_malloc_sync 00:05:39.412 EAL: No shared files mode enabled, IPC is disabled 00:05:39.412 EAL: Heap on socket 0 was expanded by 4MB 00:05:39.412 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.412 EAL: request: mp_malloc_sync 00:05:39.412 EAL: No shared files mode enabled, IPC is disabled 00:05:39.412 EAL: Heap on socket 0 was shrunk by 4MB 00:05:39.412 EAL: Trying to obtain current memory policy. 00:05:39.412 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.412 EAL: Restoring previous memory policy: 4 00:05:39.412 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.412 EAL: request: mp_malloc_sync 00:05:39.412 EAL: No shared files mode enabled, IPC is disabled 00:05:39.412 EAL: Heap on socket 0 was expanded by 6MB 00:05:39.412 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.412 EAL: request: mp_malloc_sync 00:05:39.412 EAL: No shared files mode enabled, IPC is disabled 00:05:39.412 EAL: Heap on socket 0 was shrunk by 6MB 00:05:39.412 EAL: Trying to obtain current memory policy. 00:05:39.412 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.412 EAL: Restoring previous memory policy: 4 00:05:39.412 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.412 EAL: request: mp_malloc_sync 00:05:39.413 EAL: No shared files mode enabled, IPC is disabled 00:05:39.413 EAL: Heap on socket 0 was expanded by 10MB 00:05:39.413 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.413 EAL: request: mp_malloc_sync 00:05:39.413 EAL: No shared files mode enabled, IPC is disabled 00:05:39.413 EAL: Heap on socket 0 was shrunk by 10MB 00:05:39.413 EAL: Trying to obtain current memory policy. 00:05:39.413 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.687 EAL: Restoring previous memory policy: 4 00:05:39.687 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.687 EAL: request: mp_malloc_sync 00:05:39.687 EAL: No shared files mode enabled, IPC is disabled 00:05:39.687 EAL: Heap on socket 0 was expanded by 18MB 00:05:39.687 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.687 EAL: request: mp_malloc_sync 00:05:39.687 EAL: No shared files mode enabled, IPC is disabled 00:05:39.687 EAL: Heap on socket 0 was shrunk by 18MB 00:05:39.687 EAL: Trying to obtain current memory policy. 00:05:39.687 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.687 EAL: Restoring previous memory policy: 4 00:05:39.687 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.687 EAL: request: mp_malloc_sync 00:05:39.688 EAL: No shared files mode enabled, IPC is disabled 00:05:39.688 EAL: Heap on socket 0 was expanded by 34MB 00:05:39.688 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.688 EAL: request: mp_malloc_sync 00:05:39.688 EAL: No shared files mode enabled, IPC is disabled 00:05:39.688 EAL: Heap on socket 0 was shrunk by 34MB 00:05:39.688 EAL: Trying to obtain current memory policy. 
00:05:39.688 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.688 EAL: Restoring previous memory policy: 4 00:05:39.688 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.688 EAL: request: mp_malloc_sync 00:05:39.688 EAL: No shared files mode enabled, IPC is disabled 00:05:39.688 EAL: Heap on socket 0 was expanded by 66MB 00:05:39.688 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.688 EAL: request: mp_malloc_sync 00:05:39.688 EAL: No shared files mode enabled, IPC is disabled 00:05:39.688 EAL: Heap on socket 0 was shrunk by 66MB 00:05:39.688 EAL: Trying to obtain current memory policy. 00:05:39.688 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.688 EAL: Restoring previous memory policy: 4 00:05:39.688 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.688 EAL: request: mp_malloc_sync 00:05:39.688 EAL: No shared files mode enabled, IPC is disabled 00:05:39.688 EAL: Heap on socket 0 was expanded by 130MB 00:05:39.688 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.688 EAL: request: mp_malloc_sync 00:05:39.688 EAL: No shared files mode enabled, IPC is disabled 00:05:39.688 EAL: Heap on socket 0 was shrunk by 130MB 00:05:39.688 EAL: Trying to obtain current memory policy. 00:05:39.688 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.688 EAL: Restoring previous memory policy: 4 00:05:39.688 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.688 EAL: request: mp_malloc_sync 00:05:39.688 EAL: No shared files mode enabled, IPC is disabled 00:05:39.688 EAL: Heap on socket 0 was expanded by 258MB 00:05:39.688 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.688 EAL: request: mp_malloc_sync 00:05:39.688 EAL: No shared files mode enabled, IPC is disabled 00:05:39.688 EAL: Heap on socket 0 was shrunk by 258MB 00:05:39.688 EAL: Trying to obtain current memory policy. 00:05:39.688 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.949 EAL: Restoring previous memory policy: 4 00:05:39.949 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.949 EAL: request: mp_malloc_sync 00:05:39.949 EAL: No shared files mode enabled, IPC is disabled 00:05:39.949 EAL: Heap on socket 0 was expanded by 514MB 00:05:39.949 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.949 EAL: request: mp_malloc_sync 00:05:39.949 EAL: No shared files mode enabled, IPC is disabled 00:05:39.949 EAL: Heap on socket 0 was shrunk by 514MB 00:05:39.949 EAL: Trying to obtain current memory policy. 
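The expand/shrink sizes in this suite follow a simple ladder: each pass allocates 2^n + 2 MB (4, 6, 10, 18, 34, 66, 130, 258 and 514 MB so far; the 1026 MB step finishes the run just below). The sequence reproduces with nothing beyond shell arithmetic:

  # vtophys_spdk_malloc_test allocation ladder: 2^n + 2 MB, n = 1..10
  for n in $(seq 1 10); do
    echo "$(( (1 << n) + 2 ))MB"
  done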
00:05:39.949 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.949 EAL: Restoring previous memory policy: 4 00:05:39.949 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.949 EAL: request: mp_malloc_sync 00:05:39.949 EAL: No shared files mode enabled, IPC is disabled 00:05:39.949 EAL: Heap on socket 0 was expanded by 1026MB 00:05:40.225 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.225 passed 00:05:40.225 00:05:40.225 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.225 suites 1 1 n/a 0 0 00:05:40.225 tests 2 2 2 0 0 00:05:40.225 asserts 5533 5533 5533 0 n/a 00:05:40.225 00:05:40.225 Elapsed time = 1.102 seconds 00:05:40.226 EAL: request: mp_malloc_sync 00:05:40.226 EAL: No shared files mode enabled, IPC is disabled 00:05:40.226 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:40.226 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.226 EAL: request: mp_malloc_sync 00:05:40.226 EAL: No shared files mode enabled, IPC is disabled 00:05:40.226 EAL: Heap on socket 0 was shrunk by 2MB 00:05:40.226 EAL: No shared files mode enabled, IPC is disabled 00:05:40.226 EAL: No shared files mode enabled, IPC is disabled 00:05:40.226 EAL: No shared files mode enabled, IPC is disabled 00:05:40.226 00:05:40.226 real 0m1.359s 00:05:40.226 user 0m0.531s 00:05:40.226 sys 0m0.694s 00:05:40.226 22:50:19 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.226 ************************************ 00:05:40.226 END TEST env_vtophys 00:05:40.226 ************************************ 00:05:40.226 22:50:19 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:40.493 22:50:19 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:40.493 22:50:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.493 22:50:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.493 22:50:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.493 ************************************ 00:05:40.493 START TEST env_pci 00:05:40.493 ************************************ 00:05:40.493 22:50:19 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:40.493 00:05:40.493 00:05:40.493 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.493 http://cunit.sourceforge.net/ 00:05:40.493 00:05:40.493 00:05:40.493 Suite: pci 00:05:40.493 Test: pci_hook ...[2024-11-26 22:50:19.379819] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 70822 has claimed it 00:05:40.493 passed 00:05:40.493 00:05:40.493 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.493 suites 1 1 n/a 0 0 00:05:40.493 tests 1 1 1 0 0 00:05:40.493 asserts 25 25 25 0 n/a 00:05:40.493 00:05:40.493 Elapsed time = 0.006 secondsEAL: Cannot find device (10000:00:01.0) 00:05:40.493 EAL: Failed to attach device on primary process 00:05:40.493 00:05:40.493 00:05:40.493 real 0m0.075s 00:05:40.493 user 0m0.034s 00:05:40.493 sys 0m0.040s 00:05:40.493 22:50:19 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.493 ************************************ 00:05:40.493 END TEST env_pci 00:05:40.493 ************************************ 00:05:40.493 22:50:19 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:40.493 22:50:19 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:40.493 22:50:19 env -- env/env.sh@15 -- # uname 00:05:40.493 22:50:19 env -- 
env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:40.493 22:50:19 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:40.493 22:50:19 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:40.493 22:50:19 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:40.493 22:50:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.493 22:50:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.493 ************************************ 00:05:40.493 START TEST env_dpdk_post_init 00:05:40.493 ************************************ 00:05:40.493 22:50:19 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:40.493 EAL: Detected CPU lcores: 10 00:05:40.493 EAL: Detected NUMA nodes: 1 00:05:40.493 EAL: Detected shared linkage of DPDK 00:05:40.493 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:40.493 EAL: Selected IOVA mode 'PA' 00:05:40.755 Starting DPDK initialization... 00:05:40.755 Starting SPDK post initialization... 00:05:40.755 SPDK NVMe probe 00:05:40.755 Attaching to 0000:00:10.0 00:05:40.755 Attaching to 0000:00:11.0 00:05:40.755 Attaching to 0000:00:12.0 00:05:40.755 Attaching to 0000:00:13.0 00:05:40.755 Attached to 0000:00:10.0 00:05:40.755 Attached to 0000:00:11.0 00:05:40.755 Attached to 0000:00:13.0 00:05:40.755 Attached to 0000:00:12.0 00:05:40.755 Cleaning up... 00:05:40.755 00:05:40.755 real 0m0.238s 00:05:40.755 user 0m0.067s 00:05:40.755 sys 0m0.074s 00:05:40.755 22:50:19 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.755 22:50:19 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:40.755 ************************************ 00:05:40.755 END TEST env_dpdk_post_init 00:05:40.755 ************************************ 00:05:40.755 22:50:19 env -- env/env.sh@26 -- # uname 00:05:40.755 22:50:19 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:40.755 22:50:19 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:40.755 22:50:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.755 22:50:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.755 22:50:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.755 ************************************ 00:05:40.755 START TEST env_mem_callbacks 00:05:40.755 ************************************ 00:05:40.755 22:50:19 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:40.755 EAL: Detected CPU lcores: 10 00:05:40.755 EAL: Detected NUMA nodes: 1 00:05:40.755 EAL: Detected shared linkage of DPDK 00:05:40.755 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:40.755 EAL: Selected IOVA mode 'PA' 00:05:41.016 00:05:41.016 00:05:41.016 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.016 http://cunit.sourceforge.net/ 00:05:41.016 00:05:41.016 00:05:41.016 Suite: memory 00:05:41.016 Test: test ... 
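In the mem_callbacks trace that follows, every registration lands on a 2 MiB hugepage boundary with one extra hugepage of headroom, a mapping that fits all three mallocs in this run (3 MiB -> 4 MiB, 4 MiB -> 6 MiB, 8 MiB -> 10 MiB). That formula is an observation fitted to this trace rather than a documented rule:

  # Reproduce the observed malloc-size -> registered-size mapping
  hp=$(( 2 * 1024 * 1024 ))                # 2 MiB hugepage
  for sz in 3145728 4194304 8388608; do
    echo "malloc $sz -> register $(( (sz / hp + 1) * hp ))"
  done
  # prints 4194304, 6291456, 10485760, matching the trace below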
00:05:41.016 register 0x200000200000 2097152 00:05:41.016 malloc 3145728 00:05:41.017 register 0x200000400000 4194304 00:05:41.017 buf 0x200000500000 len 3145728 PASSED 00:05:41.017 malloc 64 00:05:41.017 buf 0x2000004fff40 len 64 PASSED 00:05:41.017 malloc 4194304 00:05:41.017 register 0x200000800000 6291456 00:05:41.017 buf 0x200000a00000 len 4194304 PASSED 00:05:41.017 free 0x200000500000 3145728 00:05:41.017 free 0x2000004fff40 64 00:05:41.017 unregister 0x200000400000 4194304 PASSED 00:05:41.017 free 0x200000a00000 4194304 00:05:41.017 unregister 0x200000800000 6291456 PASSED 00:05:41.017 malloc 8388608 00:05:41.017 register 0x200000400000 10485760 00:05:41.017 buf 0x200000600000 len 8388608 PASSED 00:05:41.017 free 0x200000600000 8388608 00:05:41.017 unregister 0x200000400000 10485760 PASSED 00:05:41.017 passed 00:05:41.017 00:05:41.017 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.017 suites 1 1 n/a 0 0 00:05:41.017 tests 1 1 1 0 0 00:05:41.017 asserts 15 15 15 0 n/a 00:05:41.017 00:05:41.017 Elapsed time = 0.011 seconds 00:05:41.017 00:05:41.017 real 0m0.181s 00:05:41.017 user 0m0.026s 00:05:41.017 sys 0m0.053s 00:05:41.017 22:50:19 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.017 22:50:19 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:41.017 ************************************ 00:05:41.017 END TEST env_mem_callbacks 00:05:41.017 ************************************ 00:05:41.017 00:05:41.017 real 0m2.544s 00:05:41.017 user 0m1.068s 00:05:41.017 sys 0m1.093s 00:05:41.017 22:50:19 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.017 ************************************ 00:05:41.017 END TEST env 00:05:41.017 ************************************ 00:05:41.017 22:50:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:41.017 22:50:20 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:41.017 22:50:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.017 22:50:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.017 22:50:20 -- common/autotest_common.sh@10 -- # set +x 00:05:41.017 ************************************ 00:05:41.017 START TEST rpc 00:05:41.017 ************************************ 00:05:41.017 22:50:20 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:41.017 * Looking for test storage... 
00:05:41.017 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:41.017 22:50:20 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.017 22:50:20 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.017 22:50:20 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:41.279 22:50:20 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.279 22:50:20 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.279 22:50:20 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.279 22:50:20 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.279 22:50:20 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.279 22:50:20 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.279 22:50:20 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.279 22:50:20 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.279 22:50:20 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.279 22:50:20 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.279 22:50:20 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.279 22:50:20 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.279 22:50:20 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:41.279 22:50:20 rpc -- scripts/common.sh@345 -- # : 1 00:05:41.279 22:50:20 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.279 22:50:20 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.279 22:50:20 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:41.279 22:50:20 rpc -- scripts/common.sh@353 -- # local d=1 00:05:41.279 22:50:20 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.279 22:50:20 rpc -- scripts/common.sh@355 -- # echo 1 00:05:41.279 22:50:20 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.279 22:50:20 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:41.279 22:50:20 rpc -- scripts/common.sh@353 -- # local d=2 00:05:41.279 22:50:20 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.279 22:50:20 rpc -- scripts/common.sh@355 -- # echo 2 00:05:41.279 22:50:20 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.279 22:50:20 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
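The lcov gate traced here (mirroring the one in the env suite) is scripts/common.sh's cmp_versions: split both version strings on '.', '-' and ':', then compare numerically field by field; 1.15 sorts below 2, so the branch-coverage LCOV_OPTS get exported below. A condensed sketch of the same comparison, assuming purely numeric fields:

  # lt A B: succeed when version A sorts below version B
  lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1
  }
  lt 1.15 2 && echo "lcov 1.15 is older than 2"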
00:05:41.279 22:50:20 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.279 22:50:20 rpc -- scripts/common.sh@368 -- # return 0 00:05:41.279 22:50:20 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.279 22:50:20 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.279 --rc genhtml_branch_coverage=1 00:05:41.279 --rc genhtml_function_coverage=1 00:05:41.279 --rc genhtml_legend=1 00:05:41.279 --rc geninfo_all_blocks=1 00:05:41.279 --rc geninfo_unexecuted_blocks=1 00:05:41.279 00:05:41.279 ' 00:05:41.279 22:50:20 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.279 --rc genhtml_branch_coverage=1 00:05:41.279 --rc genhtml_function_coverage=1 00:05:41.279 --rc genhtml_legend=1 00:05:41.279 --rc geninfo_all_blocks=1 00:05:41.279 --rc geninfo_unexecuted_blocks=1 00:05:41.279 00:05:41.279 ' 00:05:41.279 22:50:20 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:41.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.279 --rc genhtml_branch_coverage=1 00:05:41.279 --rc genhtml_function_coverage=1 00:05:41.279 --rc genhtml_legend=1 00:05:41.279 --rc geninfo_all_blocks=1 00:05:41.279 --rc geninfo_unexecuted_blocks=1 00:05:41.279 00:05:41.279 ' 00:05:41.279 22:50:20 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.279 --rc genhtml_branch_coverage=1 00:05:41.279 --rc genhtml_function_coverage=1 00:05:41.279 --rc genhtml_legend=1 00:05:41.279 --rc geninfo_all_blocks=1 00:05:41.279 --rc geninfo_unexecuted_blocks=1 00:05:41.279 00:05:41.279 ' 00:05:41.279 22:50:20 rpc -- rpc/rpc.sh@65 -- # spdk_pid=70944 00:05:41.279 22:50:20 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.279 22:50:20 rpc -- rpc/rpc.sh@67 -- # waitforlisten 70944 00:05:41.279 22:50:20 rpc -- common/autotest_common.sh@835 -- # '[' -z 70944 ']' 00:05:41.279 22:50:20 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.279 22:50:20 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.279 22:50:20 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.279 22:50:20 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.279 22:50:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.279 22:50:20 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:41.279 [2024-11-26 22:50:20.237329] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:05:41.279 [2024-11-26 22:50:20.237438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70944 ] 00:05:41.279 [2024-11-26 22:50:20.369236] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
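rpc.sh@64-67 above launch spdk_tgt with the bdev tracepoint group enabled and block in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers; the startup notices, including the 'spdk_trace -s spdk_tgt -p 70944' hint, land just below. The handshake reduces to roughly this, with the polling loop a simplification of waitforlisten:

  # Start the target with bdev tracepoints, then poll its RPC socket
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  spdk_pid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods &> /dev/null; do
    sleep 0.5
  done
  echo "spdk_tgt ($spdk_pid) is listening"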
00:05:41.279 [2024-11-26 22:50:20.398145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.540 [2024-11-26 22:50:20.416534] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:41.540 [2024-11-26 22:50:20.416573] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 70944' to capture a snapshot of events at runtime. 00:05:41.540 [2024-11-26 22:50:20.416582] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:41.540 [2024-11-26 22:50:20.416592] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:41.541 [2024-11-26 22:50:20.416599] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid70944 for offline analysis/debug. 00:05:41.541 [2024-11-26 22:50:20.416898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.114 22:50:21 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.114 22:50:21 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:42.114 22:50:21 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:42.114 22:50:21 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:42.114 22:50:21 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:42.114 22:50:21 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:42.114 22:50:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.114 22:50:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.114 22:50:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.114 ************************************ 00:05:42.114 START TEST rpc_integrity 00:05:42.114 ************************************ 00:05:42.114 22:50:21 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:42.114 22:50:21 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:42.114 22:50:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.114 22:50:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.114 22:50:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.114 22:50:21 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:42.114 22:50:21 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:42.114 22:50:21 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:42.114 22:50:21 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:42.114 22:50:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.114 22:50:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.114 22:50:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.114 22:50:21 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:42.114 22:50:21 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:42.114 22:50:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.114 22:50:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.114 22:50:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:05:42.114 22:50:21 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:42.114 { 00:05:42.114 "name": "Malloc0", 00:05:42.114 "aliases": [ 00:05:42.114 "6ba9c05b-660f-49be-91f6-8dc3c45f4a04" 00:05:42.114 ], 00:05:42.114 "product_name": "Malloc disk", 00:05:42.114 "block_size": 512, 00:05:42.114 "num_blocks": 16384, 00:05:42.114 "uuid": "6ba9c05b-660f-49be-91f6-8dc3c45f4a04", 00:05:42.114 "assigned_rate_limits": { 00:05:42.114 "rw_ios_per_sec": 0, 00:05:42.114 "rw_mbytes_per_sec": 0, 00:05:42.114 "r_mbytes_per_sec": 0, 00:05:42.114 "w_mbytes_per_sec": 0 00:05:42.114 }, 00:05:42.114 "claimed": false, 00:05:42.114 "zoned": false, 00:05:42.114 "supported_io_types": { 00:05:42.114 "read": true, 00:05:42.114 "write": true, 00:05:42.114 "unmap": true, 00:05:42.114 "flush": true, 00:05:42.114 "reset": true, 00:05:42.114 "nvme_admin": false, 00:05:42.114 "nvme_io": false, 00:05:42.114 "nvme_io_md": false, 00:05:42.114 "write_zeroes": true, 00:05:42.114 "zcopy": true, 00:05:42.114 "get_zone_info": false, 00:05:42.114 "zone_management": false, 00:05:42.114 "zone_append": false, 00:05:42.114 "compare": false, 00:05:42.114 "compare_and_write": false, 00:05:42.114 "abort": true, 00:05:42.114 "seek_hole": false, 00:05:42.114 "seek_data": false, 00:05:42.114 "copy": true, 00:05:42.114 "nvme_iov_md": false 00:05:42.114 }, 00:05:42.114 "memory_domains": [ 00:05:42.114 { 00:05:42.114 "dma_device_id": "system", 00:05:42.114 "dma_device_type": 1 00:05:42.114 }, 00:05:42.114 { 00:05:42.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.114 "dma_device_type": 2 00:05:42.114 } 00:05:42.114 ], 00:05:42.114 "driver_specific": {} 00:05:42.114 } 00:05:42.114 ]' 00:05:42.114 22:50:21 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:42.114 22:50:21 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:42.114 22:50:21 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:42.114 22:50:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.114 22:50:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.114 [2024-11-26 22:50:21.179088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:42.114 [2024-11-26 22:50:21.179246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:42.114 [2024-11-26 22:50:21.179277] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:42.114 [2024-11-26 22:50:21.179289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:42.114 [2024-11-26 22:50:21.181499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:42.114 [2024-11-26 22:50:21.181534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:42.114 Passthru0 00:05:42.114 22:50:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.114 22:50:21 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:42.114 22:50:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.114 22:50:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.114 22:50:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.114 22:50:21 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:42.114 { 00:05:42.114 "name": "Malloc0", 00:05:42.114 "aliases": [ 00:05:42.114 "6ba9c05b-660f-49be-91f6-8dc3c45f4a04" 00:05:42.114 ], 00:05:42.114 "product_name": 
"Malloc disk", 00:05:42.115 "block_size": 512, 00:05:42.115 "num_blocks": 16384, 00:05:42.115 "uuid": "6ba9c05b-660f-49be-91f6-8dc3c45f4a04", 00:05:42.115 "assigned_rate_limits": { 00:05:42.115 "rw_ios_per_sec": 0, 00:05:42.115 "rw_mbytes_per_sec": 0, 00:05:42.115 "r_mbytes_per_sec": 0, 00:05:42.115 "w_mbytes_per_sec": 0 00:05:42.115 }, 00:05:42.115 "claimed": true, 00:05:42.115 "claim_type": "exclusive_write", 00:05:42.115 "zoned": false, 00:05:42.115 "supported_io_types": { 00:05:42.115 "read": true, 00:05:42.115 "write": true, 00:05:42.115 "unmap": true, 00:05:42.115 "flush": true, 00:05:42.115 "reset": true, 00:05:42.115 "nvme_admin": false, 00:05:42.115 "nvme_io": false, 00:05:42.115 "nvme_io_md": false, 00:05:42.115 "write_zeroes": true, 00:05:42.115 "zcopy": true, 00:05:42.115 "get_zone_info": false, 00:05:42.115 "zone_management": false, 00:05:42.115 "zone_append": false, 00:05:42.115 "compare": false, 00:05:42.115 "compare_and_write": false, 00:05:42.115 "abort": true, 00:05:42.115 "seek_hole": false, 00:05:42.115 "seek_data": false, 00:05:42.115 "copy": true, 00:05:42.115 "nvme_iov_md": false 00:05:42.115 }, 00:05:42.115 "memory_domains": [ 00:05:42.115 { 00:05:42.115 "dma_device_id": "system", 00:05:42.115 "dma_device_type": 1 00:05:42.115 }, 00:05:42.115 { 00:05:42.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.115 "dma_device_type": 2 00:05:42.115 } 00:05:42.115 ], 00:05:42.115 "driver_specific": {} 00:05:42.115 }, 00:05:42.115 { 00:05:42.115 "name": "Passthru0", 00:05:42.115 "aliases": [ 00:05:42.115 "dbd3dc47-b2a4-57e8-8aa5-ca907044d159" 00:05:42.115 ], 00:05:42.115 "product_name": "passthru", 00:05:42.115 "block_size": 512, 00:05:42.115 "num_blocks": 16384, 00:05:42.115 "uuid": "dbd3dc47-b2a4-57e8-8aa5-ca907044d159", 00:05:42.115 "assigned_rate_limits": { 00:05:42.115 "rw_ios_per_sec": 0, 00:05:42.115 "rw_mbytes_per_sec": 0, 00:05:42.115 "r_mbytes_per_sec": 0, 00:05:42.115 "w_mbytes_per_sec": 0 00:05:42.115 }, 00:05:42.115 "claimed": false, 00:05:42.115 "zoned": false, 00:05:42.115 "supported_io_types": { 00:05:42.115 "read": true, 00:05:42.115 "write": true, 00:05:42.115 "unmap": true, 00:05:42.115 "flush": true, 00:05:42.115 "reset": true, 00:05:42.115 "nvme_admin": false, 00:05:42.115 "nvme_io": false, 00:05:42.115 "nvme_io_md": false, 00:05:42.115 "write_zeroes": true, 00:05:42.115 "zcopy": true, 00:05:42.115 "get_zone_info": false, 00:05:42.115 "zone_management": false, 00:05:42.115 "zone_append": false, 00:05:42.115 "compare": false, 00:05:42.115 "compare_and_write": false, 00:05:42.115 "abort": true, 00:05:42.115 "seek_hole": false, 00:05:42.115 "seek_data": false, 00:05:42.115 "copy": true, 00:05:42.115 "nvme_iov_md": false 00:05:42.115 }, 00:05:42.115 "memory_domains": [ 00:05:42.115 { 00:05:42.115 "dma_device_id": "system", 00:05:42.115 "dma_device_type": 1 00:05:42.115 }, 00:05:42.115 { 00:05:42.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.115 "dma_device_type": 2 00:05:42.115 } 00:05:42.115 ], 00:05:42.115 "driver_specific": { 00:05:42.115 "passthru": { 00:05:42.115 "name": "Passthru0", 00:05:42.115 "base_bdev_name": "Malloc0" 00:05:42.115 } 00:05:42.115 } 00:05:42.115 } 00:05:42.115 ]' 00:05:42.115 22:50:21 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:42.115 22:50:21 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:42.115 22:50:21 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:42.115 22:50:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.115 22:50:21 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.376 22:50:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.376 22:50:21 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:42.376 22:50:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.376 22:50:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.376 22:50:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.376 22:50:21 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:42.376 22:50:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.376 22:50:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.376 22:50:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.376 22:50:21 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:42.376 22:50:21 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:42.376 ************************************ 00:05:42.376 END TEST rpc_integrity 00:05:42.376 ************************************ 00:05:42.376 22:50:21 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:42.376 00:05:42.377 real 0m0.222s 00:05:42.377 user 0m0.116s 00:05:42.377 sys 0m0.040s 00:05:42.377 22:50:21 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.377 22:50:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.377 22:50:21 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:42.377 22:50:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.377 22:50:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.377 22:50:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.377 ************************************ 00:05:42.377 START TEST rpc_plugins 00:05:42.377 ************************************ 00:05:42.377 22:50:21 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:42.377 22:50:21 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:42.377 22:50:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.377 22:50:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.377 22:50:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.377 22:50:21 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:42.377 22:50:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:42.377 22:50:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.377 22:50:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.377 22:50:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.377 22:50:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:42.377 { 00:05:42.377 "name": "Malloc1", 00:05:42.377 "aliases": [ 00:05:42.377 "6d7cb261-4d57-41e8-8029-d059fb92f38c" 00:05:42.377 ], 00:05:42.377 "product_name": "Malloc disk", 00:05:42.377 "block_size": 4096, 00:05:42.377 "num_blocks": 256, 00:05:42.377 "uuid": "6d7cb261-4d57-41e8-8029-d059fb92f38c", 00:05:42.377 "assigned_rate_limits": { 00:05:42.377 "rw_ios_per_sec": 0, 00:05:42.377 "rw_mbytes_per_sec": 0, 00:05:42.377 "r_mbytes_per_sec": 0, 00:05:42.377 "w_mbytes_per_sec": 0 00:05:42.377 }, 00:05:42.377 "claimed": false, 00:05:42.377 "zoned": false, 00:05:42.377 "supported_io_types": { 00:05:42.377 "read": true, 00:05:42.377 "write": true, 
00:05:42.377 "unmap": true, 00:05:42.377 "flush": true, 00:05:42.377 "reset": true, 00:05:42.377 "nvme_admin": false, 00:05:42.377 "nvme_io": false, 00:05:42.377 "nvme_io_md": false, 00:05:42.377 "write_zeroes": true, 00:05:42.377 "zcopy": true, 00:05:42.377 "get_zone_info": false, 00:05:42.377 "zone_management": false, 00:05:42.377 "zone_append": false, 00:05:42.377 "compare": false, 00:05:42.377 "compare_and_write": false, 00:05:42.377 "abort": true, 00:05:42.377 "seek_hole": false, 00:05:42.377 "seek_data": false, 00:05:42.377 "copy": true, 00:05:42.377 "nvme_iov_md": false 00:05:42.377 }, 00:05:42.377 "memory_domains": [ 00:05:42.377 { 00:05:42.377 "dma_device_id": "system", 00:05:42.377 "dma_device_type": 1 00:05:42.377 }, 00:05:42.377 { 00:05:42.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.377 "dma_device_type": 2 00:05:42.377 } 00:05:42.377 ], 00:05:42.377 "driver_specific": {} 00:05:42.377 } 00:05:42.377 ]' 00:05:42.377 22:50:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:42.377 22:50:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:42.377 22:50:21 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:42.377 22:50:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.377 22:50:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.377 22:50:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.377 22:50:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:42.377 22:50:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.377 22:50:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.377 22:50:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.377 22:50:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:42.377 22:50:21 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:42.377 ************************************ 00:05:42.377 END TEST rpc_plugins 00:05:42.377 ************************************ 00:05:42.377 22:50:21 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:42.377 00:05:42.377 real 0m0.107s 00:05:42.377 user 0m0.055s 00:05:42.377 sys 0m0.020s 00:05:42.377 22:50:21 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.377 22:50:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.377 22:50:21 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:42.377 22:50:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.377 22:50:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.377 22:50:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.639 ************************************ 00:05:42.639 START TEST rpc_trace_cmd_test 00:05:42.639 ************************************ 00:05:42.639 22:50:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:42.639 22:50:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:42.639 22:50:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:42.639 22:50:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.639 22:50:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.639 22:50:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.639 22:50:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:42.639 "tpoint_shm_path": 
"/dev/shm/spdk_tgt_trace.pid70944", 00:05:42.639 "tpoint_group_mask": "0x8", 00:05:42.639 "iscsi_conn": { 00:05:42.639 "mask": "0x2", 00:05:42.639 "tpoint_mask": "0x0" 00:05:42.639 }, 00:05:42.639 "scsi": { 00:05:42.639 "mask": "0x4", 00:05:42.639 "tpoint_mask": "0x0" 00:05:42.639 }, 00:05:42.639 "bdev": { 00:05:42.639 "mask": "0x8", 00:05:42.639 "tpoint_mask": "0xffffffffffffffff" 00:05:42.639 }, 00:05:42.639 "nvmf_rdma": { 00:05:42.639 "mask": "0x10", 00:05:42.639 "tpoint_mask": "0x0" 00:05:42.639 }, 00:05:42.639 "nvmf_tcp": { 00:05:42.639 "mask": "0x20", 00:05:42.639 "tpoint_mask": "0x0" 00:05:42.639 }, 00:05:42.639 "ftl": { 00:05:42.639 "mask": "0x40", 00:05:42.639 "tpoint_mask": "0x0" 00:05:42.639 }, 00:05:42.639 "blobfs": { 00:05:42.639 "mask": "0x80", 00:05:42.639 "tpoint_mask": "0x0" 00:05:42.640 }, 00:05:42.640 "dsa": { 00:05:42.640 "mask": "0x200", 00:05:42.640 "tpoint_mask": "0x0" 00:05:42.640 }, 00:05:42.640 "thread": { 00:05:42.640 "mask": "0x400", 00:05:42.640 "tpoint_mask": "0x0" 00:05:42.640 }, 00:05:42.640 "nvme_pcie": { 00:05:42.640 "mask": "0x800", 00:05:42.640 "tpoint_mask": "0x0" 00:05:42.640 }, 00:05:42.640 "iaa": { 00:05:42.640 "mask": "0x1000", 00:05:42.640 "tpoint_mask": "0x0" 00:05:42.640 }, 00:05:42.640 "nvme_tcp": { 00:05:42.640 "mask": "0x2000", 00:05:42.640 "tpoint_mask": "0x0" 00:05:42.640 }, 00:05:42.640 "bdev_nvme": { 00:05:42.640 "mask": "0x4000", 00:05:42.640 "tpoint_mask": "0x0" 00:05:42.640 }, 00:05:42.640 "sock": { 00:05:42.640 "mask": "0x8000", 00:05:42.640 "tpoint_mask": "0x0" 00:05:42.640 }, 00:05:42.640 "blob": { 00:05:42.640 "mask": "0x10000", 00:05:42.640 "tpoint_mask": "0x0" 00:05:42.640 }, 00:05:42.640 "bdev_raid": { 00:05:42.640 "mask": "0x20000", 00:05:42.640 "tpoint_mask": "0x0" 00:05:42.640 }, 00:05:42.640 "scheduler": { 00:05:42.640 "mask": "0x40000", 00:05:42.640 "tpoint_mask": "0x0" 00:05:42.640 } 00:05:42.640 }' 00:05:42.640 22:50:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:42.640 22:50:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:42.640 22:50:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:42.640 22:50:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:42.640 22:50:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:42.640 22:50:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:42.640 22:50:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:42.640 22:50:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:42.640 22:50:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:42.640 ************************************ 00:05:42.640 END TEST rpc_trace_cmd_test 00:05:42.640 ************************************ 00:05:42.640 22:50:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:42.640 00:05:42.640 real 0m0.175s 00:05:42.640 user 0m0.138s 00:05:42.640 sys 0m0.029s 00:05:42.640 22:50:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.640 22:50:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.640 22:50:21 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:42.640 22:50:21 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:42.640 22:50:21 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:42.640 22:50:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.640 22:50:21 rpc -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.640 22:50:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.640 ************************************ 00:05:42.640 START TEST rpc_daemon_integrity 00:05:42.640 ************************************ 00:05:42.640 22:50:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:42.640 22:50:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:42.640 22:50:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.640 22:50:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.640 22:50:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.640 22:50:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:42.640 22:50:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:42.902 22:50:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:42.903 { 00:05:42.903 "name": "Malloc2", 00:05:42.903 "aliases": [ 00:05:42.903 "94830b42-6cba-4a28-a67e-f7552e81aa47" 00:05:42.903 ], 00:05:42.903 "product_name": "Malloc disk", 00:05:42.903 "block_size": 512, 00:05:42.903 "num_blocks": 16384, 00:05:42.903 "uuid": "94830b42-6cba-4a28-a67e-f7552e81aa47", 00:05:42.903 "assigned_rate_limits": { 00:05:42.903 "rw_ios_per_sec": 0, 00:05:42.903 "rw_mbytes_per_sec": 0, 00:05:42.903 "r_mbytes_per_sec": 0, 00:05:42.903 "w_mbytes_per_sec": 0 00:05:42.903 }, 00:05:42.903 "claimed": false, 00:05:42.903 "zoned": false, 00:05:42.903 "supported_io_types": { 00:05:42.903 "read": true, 00:05:42.903 "write": true, 00:05:42.903 "unmap": true, 00:05:42.903 "flush": true, 00:05:42.903 "reset": true, 00:05:42.903 "nvme_admin": false, 00:05:42.903 "nvme_io": false, 00:05:42.903 "nvme_io_md": false, 00:05:42.903 "write_zeroes": true, 00:05:42.903 "zcopy": true, 00:05:42.903 "get_zone_info": false, 00:05:42.903 "zone_management": false, 00:05:42.903 "zone_append": false, 00:05:42.903 "compare": false, 00:05:42.903 "compare_and_write": false, 00:05:42.903 "abort": true, 00:05:42.903 "seek_hole": false, 00:05:42.903 "seek_data": false, 00:05:42.903 "copy": true, 00:05:42.903 "nvme_iov_md": false 00:05:42.903 }, 00:05:42.903 "memory_domains": [ 00:05:42.903 { 00:05:42.903 "dma_device_id": "system", 00:05:42.903 "dma_device_type": 1 00:05:42.903 }, 00:05:42.903 { 00:05:42.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.903 "dma_device_type": 2 00:05:42.903 } 00:05:42.903 ], 00:05:42.903 "driver_specific": {} 00:05:42.903 } 00:05:42.903 ]' 00:05:42.903 22:50:21 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.903 [2024-11-26 22:50:21.859427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:42.903 [2024-11-26 22:50:21.859479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:42.903 [2024-11-26 22:50:21.859499] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:42.903 [2024-11-26 22:50:21.859509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:42.903 [2024-11-26 22:50:21.861658] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:42.903 [2024-11-26 22:50:21.861795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:42.903 Passthru0 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:42.903 { 00:05:42.903 "name": "Malloc2", 00:05:42.903 "aliases": [ 00:05:42.903 "94830b42-6cba-4a28-a67e-f7552e81aa47" 00:05:42.903 ], 00:05:42.903 "product_name": "Malloc disk", 00:05:42.903 "block_size": 512, 00:05:42.903 "num_blocks": 16384, 00:05:42.903 "uuid": "94830b42-6cba-4a28-a67e-f7552e81aa47", 00:05:42.903 "assigned_rate_limits": { 00:05:42.903 "rw_ios_per_sec": 0, 00:05:42.903 "rw_mbytes_per_sec": 0, 00:05:42.903 "r_mbytes_per_sec": 0, 00:05:42.903 "w_mbytes_per_sec": 0 00:05:42.903 }, 00:05:42.903 "claimed": true, 00:05:42.903 "claim_type": "exclusive_write", 00:05:42.903 "zoned": false, 00:05:42.903 "supported_io_types": { 00:05:42.903 "read": true, 00:05:42.903 "write": true, 00:05:42.903 "unmap": true, 00:05:42.903 "flush": true, 00:05:42.903 "reset": true, 00:05:42.903 "nvme_admin": false, 00:05:42.903 "nvme_io": false, 00:05:42.903 "nvme_io_md": false, 00:05:42.903 "write_zeroes": true, 00:05:42.903 "zcopy": true, 00:05:42.903 "get_zone_info": false, 00:05:42.903 "zone_management": false, 00:05:42.903 "zone_append": false, 00:05:42.903 "compare": false, 00:05:42.903 "compare_and_write": false, 00:05:42.903 "abort": true, 00:05:42.903 "seek_hole": false, 00:05:42.903 "seek_data": false, 00:05:42.903 "copy": true, 00:05:42.903 "nvme_iov_md": false 00:05:42.903 }, 00:05:42.903 "memory_domains": [ 00:05:42.903 { 00:05:42.903 "dma_device_id": "system", 00:05:42.903 "dma_device_type": 1 00:05:42.903 }, 00:05:42.903 { 00:05:42.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.903 "dma_device_type": 2 00:05:42.903 } 00:05:42.903 ], 00:05:42.903 "driver_specific": {} 00:05:42.903 }, 00:05:42.903 { 00:05:42.903 "name": "Passthru0", 00:05:42.903 "aliases": [ 00:05:42.903 "64a2ba3a-4424-5eee-a936-4b18c5db75e6" 00:05:42.903 ], 00:05:42.903 
"product_name": "passthru", 00:05:42.903 "block_size": 512, 00:05:42.903 "num_blocks": 16384, 00:05:42.903 "uuid": "64a2ba3a-4424-5eee-a936-4b18c5db75e6", 00:05:42.903 "assigned_rate_limits": { 00:05:42.903 "rw_ios_per_sec": 0, 00:05:42.903 "rw_mbytes_per_sec": 0, 00:05:42.903 "r_mbytes_per_sec": 0, 00:05:42.903 "w_mbytes_per_sec": 0 00:05:42.903 }, 00:05:42.903 "claimed": false, 00:05:42.903 "zoned": false, 00:05:42.903 "supported_io_types": { 00:05:42.903 "read": true, 00:05:42.903 "write": true, 00:05:42.903 "unmap": true, 00:05:42.903 "flush": true, 00:05:42.903 "reset": true, 00:05:42.903 "nvme_admin": false, 00:05:42.903 "nvme_io": false, 00:05:42.903 "nvme_io_md": false, 00:05:42.903 "write_zeroes": true, 00:05:42.903 "zcopy": true, 00:05:42.903 "get_zone_info": false, 00:05:42.903 "zone_management": false, 00:05:42.903 "zone_append": false, 00:05:42.903 "compare": false, 00:05:42.903 "compare_and_write": false, 00:05:42.903 "abort": true, 00:05:42.903 "seek_hole": false, 00:05:42.903 "seek_data": false, 00:05:42.903 "copy": true, 00:05:42.903 "nvme_iov_md": false 00:05:42.903 }, 00:05:42.903 "memory_domains": [ 00:05:42.903 { 00:05:42.903 "dma_device_id": "system", 00:05:42.903 "dma_device_type": 1 00:05:42.903 }, 00:05:42.903 { 00:05:42.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.903 "dma_device_type": 2 00:05:42.903 } 00:05:42.903 ], 00:05:42.903 "driver_specific": { 00:05:42.903 "passthru": { 00:05:42.903 "name": "Passthru0", 00:05:42.903 "base_bdev_name": "Malloc2" 00:05:42.903 } 00:05:42.903 } 00:05:42.903 } 00:05:42.903 ]' 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:42.903 ************************************ 00:05:42.903 END TEST rpc_daemon_integrity 00:05:42.903 ************************************ 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:42.903 00:05:42.903 real 0m0.242s 00:05:42.903 user 0m0.148s 00:05:42.903 sys 0m0.030s 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.903 22:50:21 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:43.165 22:50:22 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:43.165 22:50:22 rpc -- rpc/rpc.sh@84 -- # killprocess 70944 00:05:43.165 22:50:22 rpc -- common/autotest_common.sh@954 -- # '[' -z 70944 ']' 00:05:43.165 22:50:22 rpc -- common/autotest_common.sh@958 -- # kill -0 70944 00:05:43.165 22:50:22 rpc -- common/autotest_common.sh@959 -- # uname 00:05:43.165 22:50:22 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.165 22:50:22 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70944 00:05:43.165 killing process with pid 70944 00:05:43.165 22:50:22 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:43.165 22:50:22 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:43.165 22:50:22 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70944' 00:05:43.165 22:50:22 rpc -- common/autotest_common.sh@973 -- # kill 70944 00:05:43.165 22:50:22 rpc -- common/autotest_common.sh@978 -- # wait 70944 00:05:43.427 00:05:43.427 real 0m2.295s 00:05:43.427 user 0m2.729s 00:05:43.427 sys 0m0.610s 00:05:43.427 22:50:22 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.427 ************************************ 00:05:43.427 22:50:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.427 END TEST rpc 00:05:43.427 ************************************ 00:05:43.427 22:50:22 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:43.427 22:50:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.427 22:50:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.427 22:50:22 -- common/autotest_common.sh@10 -- # set +x 00:05:43.427 ************************************ 00:05:43.427 START TEST skip_rpc 00:05:43.427 ************************************ 00:05:43.427 22:50:22 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:43.427 * Looking for test storage... 
00:05:43.427 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:43.427 22:50:22 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:43.427 22:50:22 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:43.427 22:50:22 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:43.427 22:50:22 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:43.427 22:50:22 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.427 22:50:22 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.427 22:50:22 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.427 22:50:22 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.427 22:50:22 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.427 22:50:22 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.427 22:50:22 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.427 22:50:22 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.427 22:50:22 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.427 22:50:22 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.427 22:50:22 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.427 22:50:22 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:43.427 22:50:22 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:43.427 22:50:22 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.427 22:50:22 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.427 22:50:22 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:43.427 22:50:22 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:43.427 22:50:22 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.427 22:50:22 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:43.427 22:50:22 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.427 22:50:22 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:43.427 22:50:22 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:43.427 22:50:22 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.427 22:50:22 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:43.427 22:50:22 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.427 22:50:22 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.427 22:50:22 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.427 22:50:22 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:43.427 22:50:22 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.427 22:50:22 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:43.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.427 --rc genhtml_branch_coverage=1 00:05:43.427 --rc genhtml_function_coverage=1 00:05:43.427 --rc genhtml_legend=1 00:05:43.427 --rc geninfo_all_blocks=1 00:05:43.427 --rc geninfo_unexecuted_blocks=1 00:05:43.427 00:05:43.427 ' 00:05:43.427 22:50:22 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:43.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.427 --rc genhtml_branch_coverage=1 00:05:43.427 --rc genhtml_function_coverage=1 00:05:43.427 --rc genhtml_legend=1 00:05:43.427 --rc geninfo_all_blocks=1 00:05:43.427 --rc geninfo_unexecuted_blocks=1 00:05:43.427 00:05:43.427 ' 00:05:43.427 22:50:22 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:43.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.427 --rc genhtml_branch_coverage=1 00:05:43.427 --rc genhtml_function_coverage=1 00:05:43.427 --rc genhtml_legend=1 00:05:43.427 --rc geninfo_all_blocks=1 00:05:43.427 --rc geninfo_unexecuted_blocks=1 00:05:43.427 00:05:43.427 ' 00:05:43.427 22:50:22 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:43.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.427 --rc genhtml_branch_coverage=1 00:05:43.427 --rc genhtml_function_coverage=1 00:05:43.427 --rc genhtml_legend=1 00:05:43.427 --rc geninfo_all_blocks=1 00:05:43.427 --rc geninfo_unexecuted_blocks=1 00:05:43.427 00:05:43.427 ' 00:05:43.427 22:50:22 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:43.427 22:50:22 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:43.427 22:50:22 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:43.427 22:50:22 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.427 22:50:22 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.427 22:50:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.427 ************************************ 00:05:43.427 START TEST skip_rpc 00:05:43.427 ************************************ 00:05:43.427 22:50:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:43.427 22:50:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=71145 00:05:43.427 22:50:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.427 22:50:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:43.427 22:50:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:43.689 [2024-11-26 22:50:22.605351] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:05:43.689 [2024-11-26 22:50:22.605486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71145 ] 00:05:43.689 [2024-11-26 22:50:22.740723] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
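What follows is the core of the skip_rpc case: the target was launched with --no-rpc-server, so every RPC against it must fail, and the test only passes because the failure is detected. A minimal sketch of that pattern, using this run's paths (the real test routes through the rpc_cmd and NOT helpers, so this is an outline, not the verbatim code):

  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &   # no RPC listener at all
  spdk_pid=$!
  sleep 5                                               # no socket to poll, so just wait
  # Any RPC has to fail here; the test wraps this in NOT to assert exactly that.
  if "$SPDK/scripts/rpc.py" spdk_get_version; then
      echo "RPC unexpectedly succeeded" >&2; exit 1
  fi
  kill "$spdk_pid"; wait "$spdk_pid"                    # killprocess-style teardown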
00:05:43.689 [2024-11-26 22:50:22.770500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.689 [2024-11-26 22:50:22.791370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.024 22:50:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:49.024 22:50:27 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:49.024 22:50:27 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:49.024 22:50:27 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:49.024 22:50:27 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.024 22:50:27 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:49.024 22:50:27 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.024 22:50:27 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:49.024 22:50:27 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.024 22:50:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.024 22:50:27 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:49.024 22:50:27 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:49.024 22:50:27 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:49.024 22:50:27 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:49.024 22:50:27 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:49.024 22:50:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:49.024 22:50:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 71145 00:05:49.024 22:50:27 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 71145 ']' 00:05:49.024 22:50:27 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 71145 00:05:49.024 22:50:27 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:49.024 22:50:27 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.024 22:50:27 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71145 00:05:49.024 22:50:27 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.024 22:50:27 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.024 22:50:27 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71145' 00:05:49.024 killing process with pid 71145 00:05:49.024 22:50:27 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 71145 00:05:49.024 22:50:27 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 71145 00:05:49.024 00:05:49.024 real 0m5.513s 00:05:49.024 user 0m5.122s 00:05:49.024 sys 0m0.285s 00:05:49.024 22:50:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.024 ************************************ 00:05:49.024 END TEST skip_rpc 00:05:49.024 ************************************ 00:05:49.024 22:50:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.024 22:50:28 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:49.024 22:50:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.024 22:50:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.024 22:50:28 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.024 ************************************ 00:05:49.024 START TEST skip_rpc_with_json 00:05:49.024 ************************************ 00:05:49.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.024 22:50:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:49.024 22:50:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:49.024 22:50:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=71227 00:05:49.024 22:50:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.024 22:50:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 71227 00:05:49.024 22:50:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 71227 ']' 00:05:49.024 22:50:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.024 22:50:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.024 22:50:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.024 22:50:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.024 22:50:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.024 22:50:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.286 [2024-11-26 22:50:28.203323] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:05:49.286 [2024-11-26 22:50:28.203464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71227 ] 00:05:49.286 [2024-11-26 22:50:28.339158] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
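skip_rpc_with_json drives the whole JSON round-trip that follows: wait for the RPC socket, provoke the expected nvmf_get_transports error, create the tcp transport, dump the live config, then boot a second target from that file and grep its log for the transport-init notice. In outline (a sketch using this run's paths; the real code goes through waitforlisten and rpc_cmd, and the log redirection is an assumption):

  SPDK=/home/vagrant/spdk_repo/spdk
  rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }
  "$SPDK/build/bin/spdk_tgt" -m 0x1 & pid=$!
  until rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done   # waitforlisten-style poll
  rpc nvmf_get_transports --trtype tcp || true    # expected to fail: no transport yet
  rpc nvmf_create_transport -t tcp                # logs 'TCP Transport Init'
  rpc save_config > "$SPDK/test/rpc/config.json"  # snapshot the live configuration
  kill "$pid"; wait "$pid"
  "$SPDK/build/bin/spdk_tgt" --no-rpc-server -m 0x1 --json "$SPDK/test/rpc/config.json" \
      > "$SPDK/test/rpc/log.txt" 2>&1 & pid=$!
  sleep 5; kill "$pid"; wait "$pid"
  grep -q 'TCP Transport Init' "$SPDK/test/rpc/log.txt"   # transport re-created at boot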
00:05:49.286 [2024-11-26 22:50:28.369416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.286 [2024-11-26 22:50:28.396287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.228 22:50:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.228 22:50:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:50.228 22:50:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:50.228 22:50:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.228 22:50:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:50.228 [2024-11-26 22:50:29.046615] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:50.228 request: 00:05:50.228 { 00:05:50.228 "trtype": "tcp", 00:05:50.228 "method": "nvmf_get_transports", 00:05:50.228 "req_id": 1 00:05:50.228 } 00:05:50.228 Got JSON-RPC error response 00:05:50.228 response: 00:05:50.228 { 00:05:50.228 "code": -19, 00:05:50.228 "message": "No such device" 00:05:50.228 } 00:05:50.228 22:50:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:50.228 22:50:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:50.228 22:50:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.228 22:50:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:50.228 [2024-11-26 22:50:29.054722] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:50.228 22:50:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.228 22:50:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:50.228 22:50:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.228 22:50:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:50.228 22:50:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.229 22:50:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:50.229 { 00:05:50.229 "subsystems": [ 00:05:50.229 { 00:05:50.229 "subsystem": "fsdev", 00:05:50.229 "config": [ 00:05:50.229 { 00:05:50.229 "method": "fsdev_set_opts", 00:05:50.229 "params": { 00:05:50.229 "fsdev_io_pool_size": 65535, 00:05:50.229 "fsdev_io_cache_size": 256 00:05:50.229 } 00:05:50.229 } 00:05:50.229 ] 00:05:50.229 }, 00:05:50.229 { 00:05:50.229 "subsystem": "keyring", 00:05:50.229 "config": [] 00:05:50.229 }, 00:05:50.229 { 00:05:50.229 "subsystem": "iobuf", 00:05:50.229 "config": [ 00:05:50.229 { 00:05:50.229 "method": "iobuf_set_options", 00:05:50.229 "params": { 00:05:50.229 "small_pool_count": 8192, 00:05:50.229 "large_pool_count": 1024, 00:05:50.229 "small_bufsize": 8192, 00:05:50.229 "large_bufsize": 135168, 00:05:50.229 "enable_numa": false 00:05:50.229 } 00:05:50.229 } 00:05:50.229 ] 00:05:50.229 }, 00:05:50.229 { 00:05:50.229 "subsystem": "sock", 00:05:50.229 "config": [ 00:05:50.229 { 00:05:50.229 "method": "sock_set_default_impl", 00:05:50.229 "params": { 00:05:50.229 "impl_name": "posix" 00:05:50.229 } 00:05:50.229 }, 00:05:50.229 { 00:05:50.229 "method": "sock_impl_set_options", 00:05:50.229 "params": { 00:05:50.229 "impl_name": "ssl", 00:05:50.229 "recv_buf_size": 4096, 00:05:50.229 
"send_buf_size": 4096, 00:05:50.229 "enable_recv_pipe": true, 00:05:50.229 "enable_quickack": false, 00:05:50.229 "enable_placement_id": 0, 00:05:50.229 "enable_zerocopy_send_server": true, 00:05:50.229 "enable_zerocopy_send_client": false, 00:05:50.229 "zerocopy_threshold": 0, 00:05:50.229 "tls_version": 0, 00:05:50.229 "enable_ktls": false 00:05:50.229 } 00:05:50.229 }, 00:05:50.229 { 00:05:50.229 "method": "sock_impl_set_options", 00:05:50.229 "params": { 00:05:50.229 "impl_name": "posix", 00:05:50.229 "recv_buf_size": 2097152, 00:05:50.229 "send_buf_size": 2097152, 00:05:50.229 "enable_recv_pipe": true, 00:05:50.229 "enable_quickack": false, 00:05:50.229 "enable_placement_id": 0, 00:05:50.229 "enable_zerocopy_send_server": true, 00:05:50.229 "enable_zerocopy_send_client": false, 00:05:50.229 "zerocopy_threshold": 0, 00:05:50.229 "tls_version": 0, 00:05:50.229 "enable_ktls": false 00:05:50.229 } 00:05:50.229 } 00:05:50.229 ] 00:05:50.229 }, 00:05:50.229 { 00:05:50.229 "subsystem": "vmd", 00:05:50.229 "config": [] 00:05:50.229 }, 00:05:50.229 { 00:05:50.229 "subsystem": "accel", 00:05:50.229 "config": [ 00:05:50.229 { 00:05:50.229 "method": "accel_set_options", 00:05:50.229 "params": { 00:05:50.229 "small_cache_size": 128, 00:05:50.229 "large_cache_size": 16, 00:05:50.229 "task_count": 2048, 00:05:50.229 "sequence_count": 2048, 00:05:50.229 "buf_count": 2048 00:05:50.229 } 00:05:50.229 } 00:05:50.229 ] 00:05:50.229 }, 00:05:50.229 { 00:05:50.229 "subsystem": "bdev", 00:05:50.229 "config": [ 00:05:50.229 { 00:05:50.229 "method": "bdev_set_options", 00:05:50.229 "params": { 00:05:50.229 "bdev_io_pool_size": 65535, 00:05:50.229 "bdev_io_cache_size": 256, 00:05:50.229 "bdev_auto_examine": true, 00:05:50.229 "iobuf_small_cache_size": 128, 00:05:50.229 "iobuf_large_cache_size": 16 00:05:50.229 } 00:05:50.229 }, 00:05:50.229 { 00:05:50.229 "method": "bdev_raid_set_options", 00:05:50.229 "params": { 00:05:50.229 "process_window_size_kb": 1024, 00:05:50.229 "process_max_bandwidth_mb_sec": 0 00:05:50.229 } 00:05:50.229 }, 00:05:50.229 { 00:05:50.229 "method": "bdev_iscsi_set_options", 00:05:50.229 "params": { 00:05:50.229 "timeout_sec": 30 00:05:50.229 } 00:05:50.229 }, 00:05:50.229 { 00:05:50.229 "method": "bdev_nvme_set_options", 00:05:50.229 "params": { 00:05:50.229 "action_on_timeout": "none", 00:05:50.229 "timeout_us": 0, 00:05:50.229 "timeout_admin_us": 0, 00:05:50.229 "keep_alive_timeout_ms": 10000, 00:05:50.229 "arbitration_burst": 0, 00:05:50.229 "low_priority_weight": 0, 00:05:50.229 "medium_priority_weight": 0, 00:05:50.229 "high_priority_weight": 0, 00:05:50.229 "nvme_adminq_poll_period_us": 10000, 00:05:50.229 "nvme_ioq_poll_period_us": 0, 00:05:50.229 "io_queue_requests": 0, 00:05:50.229 "delay_cmd_submit": true, 00:05:50.229 "transport_retry_count": 4, 00:05:50.229 "bdev_retry_count": 3, 00:05:50.229 "transport_ack_timeout": 0, 00:05:50.229 "ctrlr_loss_timeout_sec": 0, 00:05:50.229 "reconnect_delay_sec": 0, 00:05:50.229 "fast_io_fail_timeout_sec": 0, 00:05:50.229 "disable_auto_failback": false, 00:05:50.229 "generate_uuids": false, 00:05:50.229 "transport_tos": 0, 00:05:50.229 "nvme_error_stat": false, 00:05:50.229 "rdma_srq_size": 0, 00:05:50.229 "io_path_stat": false, 00:05:50.229 "allow_accel_sequence": false, 00:05:50.229 "rdma_max_cq_size": 0, 00:05:50.229 "rdma_cm_event_timeout_ms": 0, 00:05:50.229 "dhchap_digests": [ 00:05:50.229 "sha256", 00:05:50.229 "sha384", 00:05:50.229 "sha512" 00:05:50.229 ], 00:05:50.229 "dhchap_dhgroups": [ 00:05:50.229 "null", 00:05:50.229 
"ffdhe2048", 00:05:50.229 "ffdhe3072", 00:05:50.229 "ffdhe4096", 00:05:50.229 "ffdhe6144", 00:05:50.229 "ffdhe8192" 00:05:50.229 ] 00:05:50.229 } 00:05:50.229 }, 00:05:50.229 { 00:05:50.229 "method": "bdev_nvme_set_hotplug", 00:05:50.229 "params": { 00:05:50.229 "period_us": 100000, 00:05:50.229 "enable": false 00:05:50.229 } 00:05:50.229 }, 00:05:50.229 { 00:05:50.229 "method": "bdev_wait_for_examine" 00:05:50.229 } 00:05:50.229 ] 00:05:50.229 }, 00:05:50.229 { 00:05:50.229 "subsystem": "scsi", 00:05:50.229 "config": null 00:05:50.229 }, 00:05:50.229 { 00:05:50.229 "subsystem": "scheduler", 00:05:50.229 "config": [ 00:05:50.229 { 00:05:50.229 "method": "framework_set_scheduler", 00:05:50.229 "params": { 00:05:50.229 "name": "static" 00:05:50.229 } 00:05:50.229 } 00:05:50.229 ] 00:05:50.229 }, 00:05:50.229 { 00:05:50.229 "subsystem": "vhost_scsi", 00:05:50.229 "config": [] 00:05:50.229 }, 00:05:50.229 { 00:05:50.229 "subsystem": "vhost_blk", 00:05:50.229 "config": [] 00:05:50.229 }, 00:05:50.229 { 00:05:50.229 "subsystem": "ublk", 00:05:50.229 "config": [] 00:05:50.229 }, 00:05:50.229 { 00:05:50.229 "subsystem": "nbd", 00:05:50.229 "config": [] 00:05:50.229 }, 00:05:50.229 { 00:05:50.229 "subsystem": "nvmf", 00:05:50.229 "config": [ 00:05:50.229 { 00:05:50.229 "method": "nvmf_set_config", 00:05:50.229 "params": { 00:05:50.229 "discovery_filter": "match_any", 00:05:50.229 "admin_cmd_passthru": { 00:05:50.229 "identify_ctrlr": false 00:05:50.229 }, 00:05:50.229 "dhchap_digests": [ 00:05:50.229 "sha256", 00:05:50.229 "sha384", 00:05:50.229 "sha512" 00:05:50.229 ], 00:05:50.229 "dhchap_dhgroups": [ 00:05:50.229 "null", 00:05:50.229 "ffdhe2048", 00:05:50.229 "ffdhe3072", 00:05:50.229 "ffdhe4096", 00:05:50.229 "ffdhe6144", 00:05:50.229 "ffdhe8192" 00:05:50.229 ] 00:05:50.229 } 00:05:50.229 }, 00:05:50.229 { 00:05:50.229 "method": "nvmf_set_max_subsystems", 00:05:50.229 "params": { 00:05:50.229 "max_subsystems": 1024 00:05:50.229 } 00:05:50.229 }, 00:05:50.229 { 00:05:50.229 "method": "nvmf_set_crdt", 00:05:50.229 "params": { 00:05:50.229 "crdt1": 0, 00:05:50.229 "crdt2": 0, 00:05:50.229 "crdt3": 0 00:05:50.229 } 00:05:50.229 }, 00:05:50.229 { 00:05:50.229 "method": "nvmf_create_transport", 00:05:50.229 "params": { 00:05:50.229 "trtype": "TCP", 00:05:50.229 "max_queue_depth": 128, 00:05:50.229 "max_io_qpairs_per_ctrlr": 127, 00:05:50.229 "in_capsule_data_size": 4096, 00:05:50.229 "max_io_size": 131072, 00:05:50.229 "io_unit_size": 131072, 00:05:50.229 "max_aq_depth": 128, 00:05:50.229 "num_shared_buffers": 511, 00:05:50.229 "buf_cache_size": 4294967295, 00:05:50.229 "dif_insert_or_strip": false, 00:05:50.229 "zcopy": false, 00:05:50.229 "c2h_success": true, 00:05:50.229 "sock_priority": 0, 00:05:50.229 "abort_timeout_sec": 1, 00:05:50.229 "ack_timeout": 0, 00:05:50.229 "data_wr_pool_size": 0 00:05:50.229 } 00:05:50.229 } 00:05:50.229 ] 00:05:50.229 }, 00:05:50.229 { 00:05:50.229 "subsystem": "iscsi", 00:05:50.229 "config": [ 00:05:50.229 { 00:05:50.229 "method": "iscsi_set_options", 00:05:50.229 "params": { 00:05:50.229 "node_base": "iqn.2016-06.io.spdk", 00:05:50.229 "max_sessions": 128, 00:05:50.229 "max_connections_per_session": 2, 00:05:50.229 "max_queue_depth": 64, 00:05:50.229 "default_time2wait": 2, 00:05:50.229 "default_time2retain": 20, 00:05:50.229 "first_burst_length": 8192, 00:05:50.229 "immediate_data": true, 00:05:50.229 "allow_duplicated_isid": false, 00:05:50.229 "error_recovery_level": 0, 00:05:50.229 "nop_timeout": 60, 00:05:50.229 "nop_in_interval": 30, 00:05:50.229 
"disable_chap": false, 00:05:50.229 "require_chap": false, 00:05:50.229 "mutual_chap": false, 00:05:50.229 "chap_group": 0, 00:05:50.229 "max_large_datain_per_connection": 64, 00:05:50.229 "max_r2t_per_connection": 4, 00:05:50.229 "pdu_pool_size": 36864, 00:05:50.229 "immediate_data_pool_size": 16384, 00:05:50.229 "data_out_pool_size": 2048 00:05:50.229 } 00:05:50.229 } 00:05:50.229 ] 00:05:50.229 } 00:05:50.229 ] 00:05:50.229 } 00:05:50.229 22:50:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:50.229 22:50:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 71227 00:05:50.229 22:50:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 71227 ']' 00:05:50.229 22:50:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 71227 00:05:50.229 22:50:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:50.229 22:50:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.229 22:50:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71227 00:05:50.229 killing process with pid 71227 00:05:50.229 22:50:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.229 22:50:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.229 22:50:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71227' 00:05:50.229 22:50:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 71227 00:05:50.229 22:50:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 71227 00:05:50.799 22:50:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=71261 00:05:50.799 22:50:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:50.799 22:50:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:56.076 22:50:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 71261 00:05:56.076 22:50:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 71261 ']' 00:05:56.076 22:50:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 71261 00:05:56.076 22:50:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:56.076 22:50:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.076 22:50:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71261 00:05:56.076 killing process with pid 71261 00:05:56.076 22:50:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.076 22:50:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.076 22:50:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71261' 00:05:56.076 22:50:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 71261 00:05:56.076 22:50:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 71261 00:05:56.076 22:50:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:56.076 22:50:35 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:56.076 00:05:56.076 real 0m6.972s 00:05:56.076 user 0m6.349s 00:05:56.076 sys 0m0.863s 00:05:56.076 22:50:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.076 ************************************ 00:05:56.076 END TEST skip_rpc_with_json 00:05:56.076 ************************************ 00:05:56.076 22:50:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:56.076 22:50:35 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:56.076 22:50:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.076 22:50:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.076 22:50:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.076 ************************************ 00:05:56.076 START TEST skip_rpc_with_delay 00:05:56.076 ************************************ 00:05:56.076 22:50:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:56.076 22:50:35 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:56.076 22:50:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:56.076 22:50:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:56.076 22:50:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:56.076 22:50:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.076 22:50:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:56.076 22:50:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.076 22:50:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:56.076 22:50:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.076 22:50:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:56.076 22:50:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:56.076 22:50:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:56.336 [2024-11-26 22:50:35.205630] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
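That ERROR is the point of skip_rpc_with_delay: --wait-for-rpc without an RPC server is rejected at startup, and the NOT wrapper converts the non-zero exit into a pass, as the es checks just below show. A minimal sketch of such a wrapper (simplified from the shape of autotest_common.sh; the es > 128 branch mirrors the crash check visible in this log):

  NOT() {
      local es=0
      "$@" || es=$?
      (( es > 128 )) && return "$es"   # signal/crash exit: propagate, don't mask
      (( es == 0 )) && return 1        # command succeeded, but failure was required
      return 0                         # failed cleanly, as expected
  }
  # usage matching the run above:
  # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc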
00:05:56.337 22:50:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:56.337 22:50:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:56.337 22:50:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:56.337 22:50:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:56.337 00:05:56.337 real 0m0.124s 00:05:56.337 user 0m0.061s 00:05:56.337 sys 0m0.062s 00:05:56.337 ************************************ 00:05:56.337 END TEST skip_rpc_with_delay 00:05:56.337 ************************************ 00:05:56.337 22:50:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.337 22:50:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:56.337 22:50:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:56.337 22:50:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:56.337 22:50:35 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:56.337 22:50:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.337 22:50:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.337 22:50:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.337 ************************************ 00:05:56.337 START TEST exit_on_failed_rpc_init 00:05:56.337 ************************************ 00:05:56.337 22:50:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:56.337 22:50:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=71367 00:05:56.337 22:50:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 71367 00:05:56.337 22:50:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 71367 ']' 00:05:56.337 22:50:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.337 22:50:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.337 22:50:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.337 22:50:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.337 22:50:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.337 22:50:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:56.337 [2024-11-26 22:50:35.389193] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:05:56.337 [2024-11-26 22:50:35.389318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71367 ] 00:05:56.596 [2024-11-26 22:50:35.521375] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:56.596 [2024-11-26 22:50:35.546944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.596 [2024-11-26 22:50:35.569109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.162 22:50:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.162 22:50:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:57.162 22:50:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.162 22:50:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:57.162 22:50:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:57.162 22:50:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:57.162 22:50:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:57.162 22:50:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.162 22:50:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:57.162 22:50:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.162 22:50:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:57.162 22:50:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.162 22:50:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:57.162 22:50:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:57.162 22:50:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:57.420 [2024-11-26 22:50:36.292143] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:05:57.420 [2024-11-26 22:50:36.292258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71385 ] 00:05:57.420 [2024-11-26 22:50:36.422904] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:57.420 [2024-11-26 22:50:36.445430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.420 [2024-11-26 22:50:36.463618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.420 [2024-11-26 22:50:36.463697] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:57.420 [2024-11-26 22:50:36.463710] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:57.420 [2024-11-26 22:50:36.463721] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:57.420 22:50:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:57.420 22:50:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:57.420 22:50:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:57.420 22:50:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:57.420 22:50:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:57.420 22:50:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:57.420 22:50:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:57.420 22:50:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 71367 00:05:57.420 22:50:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 71367 ']' 00:05:57.420 22:50:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 71367 00:05:57.420 22:50:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:57.420 22:50:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.420 22:50:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71367 00:05:57.679 22:50:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.679 22:50:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.679 22:50:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71367' 00:05:57.679 killing process with pid 71367 00:05:57.679 22:50:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 71367 00:05:57.679 22:50:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 71367 00:05:57.939 00:05:57.939 real 0m1.536s 00:05:57.939 user 0m1.654s 00:05:57.939 sys 0m0.401s 00:05:57.939 ************************************ 00:05:57.939 END TEST exit_on_failed_rpc_init 00:05:57.939 ************************************ 00:05:57.939 22:50:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.939 22:50:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:57.939 22:50:36 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:57.939 00:05:57.939 real 0m14.516s 00:05:57.939 user 0m13.322s 00:05:57.939 sys 0m1.807s 00:05:57.940 22:50:36 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.940 22:50:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.940 ************************************ 00:05:57.940 END TEST skip_rpc 00:05:57.940 ************************************ 00:05:57.940 22:50:36 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:57.940 22:50:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.940 22:50:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.940 22:50:36 -- common/autotest_common.sh@10 -- # set +x 00:05:57.940 
************************************ 00:05:57.940 START TEST rpc_client 00:05:57.940 ************************************ 00:05:57.940 22:50:36 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:57.940 * Looking for test storage... 00:05:57.940 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:57.940 22:50:37 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:57.940 22:50:37 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:57.940 22:50:37 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:57.940 22:50:37 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:57.940 22:50:37 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.940 22:50:37 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.940 22:50:37 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.940 22:50:37 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.940 22:50:37 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.940 22:50:37 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.940 22:50:37 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.940 22:50:37 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.940 22:50:37 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.940 22:50:37 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.940 22:50:37 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.940 22:50:37 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:57.940 22:50:37 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:58.202 22:50:37 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.202 22:50:37 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:58.202 22:50:37 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:58.202 22:50:37 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:58.202 22:50:37 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.202 22:50:37 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:58.202 22:50:37 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.202 22:50:37 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:58.202 22:50:37 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:58.202 22:50:37 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.202 22:50:37 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:58.202 22:50:37 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.202 22:50:37 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.202 22:50:37 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.202 22:50:37 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:58.202 22:50:37 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.202 22:50:37 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:58.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.202 --rc genhtml_branch_coverage=1 00:05:58.202 --rc genhtml_function_coverage=1 00:05:58.202 --rc genhtml_legend=1 00:05:58.202 --rc geninfo_all_blocks=1 00:05:58.202 --rc geninfo_unexecuted_blocks=1 00:05:58.202 00:05:58.202 ' 00:05:58.202 22:50:37 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:58.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.202 --rc genhtml_branch_coverage=1 00:05:58.202 --rc genhtml_function_coverage=1 00:05:58.202 --rc genhtml_legend=1 00:05:58.202 --rc geninfo_all_blocks=1 00:05:58.202 --rc geninfo_unexecuted_blocks=1 00:05:58.202 00:05:58.202 ' 00:05:58.202 22:50:37 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:58.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.202 --rc genhtml_branch_coverage=1 00:05:58.202 --rc genhtml_function_coverage=1 00:05:58.202 --rc genhtml_legend=1 00:05:58.202 --rc geninfo_all_blocks=1 00:05:58.202 --rc geninfo_unexecuted_blocks=1 00:05:58.202 00:05:58.202 ' 00:05:58.202 22:50:37 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:58.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.202 --rc genhtml_branch_coverage=1 00:05:58.202 --rc genhtml_function_coverage=1 00:05:58.202 --rc genhtml_legend=1 00:05:58.202 --rc geninfo_all_blocks=1 00:05:58.202 --rc geninfo_unexecuted_blocks=1 00:05:58.202 00:05:58.202 ' 00:05:58.202 22:50:37 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:58.202 OK 00:05:58.202 22:50:37 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:58.202 00:05:58.202 real 0m0.190s 00:05:58.202 user 0m0.103s 00:05:58.202 sys 0m0.095s 00:05:58.202 22:50:37 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.202 ************************************ 00:05:58.202 END TEST rpc_client 00:05:58.202 ************************************ 00:05:58.202 22:50:37 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:58.202 22:50:37 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:58.202 22:50:37 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.202 22:50:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.202 22:50:37 -- common/autotest_common.sh@10 -- # set +x 00:05:58.202 ************************************ 00:05:58.202 START TEST json_config 00:05:58.202 ************************************ 00:05:58.202 22:50:37 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:58.202 22:50:37 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:58.202 22:50:37 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:58.202 22:50:37 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:58.202 22:50:37 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:58.202 22:50:37 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.202 22:50:37 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.202 22:50:37 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.202 22:50:37 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.202 22:50:37 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.202 22:50:37 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.202 22:50:37 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.202 22:50:37 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.202 22:50:37 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.202 22:50:37 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.202 22:50:37 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.202 22:50:37 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:58.202 22:50:37 json_config -- scripts/common.sh@345 -- # : 1 00:05:58.202 22:50:37 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.202 22:50:37 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:58.202 22:50:37 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:58.202 22:50:37 json_config -- scripts/common.sh@353 -- # local d=1 00:05:58.202 22:50:37 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.202 22:50:37 json_config -- scripts/common.sh@355 -- # echo 1 00:05:58.202 22:50:37 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.202 22:50:37 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:58.202 22:50:37 json_config -- scripts/common.sh@353 -- # local d=2 00:05:58.202 22:50:37 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.202 22:50:37 json_config -- scripts/common.sh@355 -- # echo 2 00:05:58.202 22:50:37 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.202 22:50:37 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.202 22:50:37 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.202 22:50:37 json_config -- scripts/common.sh@368 -- # return 0 00:05:58.202 22:50:37 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.202 22:50:37 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:58.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.202 --rc genhtml_branch_coverage=1 00:05:58.202 --rc genhtml_function_coverage=1 00:05:58.202 --rc genhtml_legend=1 00:05:58.202 --rc geninfo_all_blocks=1 00:05:58.202 --rc geninfo_unexecuted_blocks=1 00:05:58.202 00:05:58.202 ' 00:05:58.202 22:50:37 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:58.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.202 --rc genhtml_branch_coverage=1 00:05:58.202 --rc genhtml_function_coverage=1 00:05:58.202 --rc genhtml_legend=1 00:05:58.202 --rc geninfo_all_blocks=1 00:05:58.202 --rc geninfo_unexecuted_blocks=1 00:05:58.202 00:05:58.202 ' 00:05:58.202 22:50:37 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:58.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.202 --rc genhtml_branch_coverage=1 00:05:58.202 --rc genhtml_function_coverage=1 00:05:58.202 --rc genhtml_legend=1 00:05:58.202 --rc geninfo_all_blocks=1 00:05:58.202 --rc geninfo_unexecuted_blocks=1 00:05:58.202 00:05:58.202 ' 00:05:58.203 22:50:37 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:58.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.203 --rc genhtml_branch_coverage=1 00:05:58.203 --rc genhtml_function_coverage=1 00:05:58.203 --rc genhtml_legend=1 00:05:58.203 --rc geninfo_all_blocks=1 00:05:58.203 --rc geninfo_unexecuted_blocks=1 00:05:58.203 00:05:58.203 ' 00:05:58.203 22:50:37 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:58.203 22:50:37 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:58.203 22:50:37 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:58.203 22:50:37 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:58.203 22:50:37 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:58.203 22:50:37 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:58.203 22:50:37 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:58.203 22:50:37 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:58.203 22:50:37 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:58.203 22:50:37 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:58.203 22:50:37 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:58.203 22:50:37 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:58.203 22:50:37 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e86f0635-77ac-4fdf-8e71-de7b7fded113 00:05:58.203 22:50:37 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=e86f0635-77ac-4fdf-8e71-de7b7fded113 00:05:58.203 22:50:37 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:58.203 22:50:37 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:58.203 22:50:37 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:58.203 22:50:37 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:58.203 22:50:37 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:58.203 22:50:37 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:58.203 22:50:37 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:58.203 22:50:37 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:58.203 22:50:37 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.203 22:50:37 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.203 22:50:37 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.203 22:50:37 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.203 22:50:37 json_config -- paths/export.sh@5 -- # export PATH 00:05:58.203 22:50:37 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.203 22:50:37 json_config -- nvmf/common.sh@51 -- # : 0 00:05:58.203 22:50:37 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:58.203 22:50:37 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:58.203 22:50:37 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:58.203 22:50:37 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:58.203 22:50:37 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:58.203 22:50:37 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:58.203 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:58.203 22:50:37 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:58.203 22:50:37 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:58.203 22:50:37 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:58.203 22:50:37 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:58.203 22:50:37 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:58.203 22:50:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:58.203 22:50:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:58.203 WARNING: No tests are enabled so not running JSON configuration tests 00:05:58.203 22:50:37 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:58.203 22:50:37 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:58.203 22:50:37 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:58.203 00:05:58.203 real 0m0.134s 00:05:58.203 user 0m0.081s 00:05:58.203 sys 0m0.057s 00:05:58.203 ************************************ 00:05:58.203 END TEST json_config 00:05:58.203 ************************************ 00:05:58.203 22:50:37 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.203 22:50:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.465 22:50:37 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:58.465 22:50:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.465 22:50:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.465 22:50:37 -- common/autotest_common.sh@10 -- # set +x 00:05:58.465 ************************************ 00:05:58.465 START TEST json_config_extra_key 00:05:58.465 ************************************ 00:05:58.465 22:50:37 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:58.465 22:50:37 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:58.465 22:50:37 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:58.465 22:50:37 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:58.465 22:50:37 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:58.465 22:50:37 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.465 22:50:37 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.465 22:50:37 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.465 22:50:37 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.465 22:50:37 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.465 22:50:37 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.465 22:50:37 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.465 22:50:37 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.465 22:50:37 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.465 22:50:37 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.465 22:50:37 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.465 22:50:37 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:58.465 22:50:37 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:58.465 22:50:37 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.465 22:50:37 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:58.465 22:50:37 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:58.465 22:50:37 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:58.465 22:50:37 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.465 22:50:37 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:58.465 22:50:37 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.465 22:50:37 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:58.465 22:50:37 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:58.465 22:50:37 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.465 22:50:37 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:58.466 22:50:37 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.466 22:50:37 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.466 22:50:37 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.466 22:50:37 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:58.466 22:50:37 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.466 22:50:37 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:58.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.466 --rc genhtml_branch_coverage=1 00:05:58.466 --rc genhtml_function_coverage=1 00:05:58.466 --rc genhtml_legend=1 00:05:58.466 --rc geninfo_all_blocks=1 00:05:58.466 --rc geninfo_unexecuted_blocks=1 00:05:58.466 00:05:58.466 ' 00:05:58.466 22:50:37 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:58.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.466 --rc genhtml_branch_coverage=1 00:05:58.466 --rc genhtml_function_coverage=1 00:05:58.466 --rc genhtml_legend=1 00:05:58.466 --rc geninfo_all_blocks=1 00:05:58.466 --rc geninfo_unexecuted_blocks=1 00:05:58.466 00:05:58.466 ' 00:05:58.466 22:50:37 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:58.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.466 --rc genhtml_branch_coverage=1 00:05:58.466 --rc genhtml_function_coverage=1 00:05:58.466 --rc genhtml_legend=1 00:05:58.466 --rc geninfo_all_blocks=1 00:05:58.466 --rc geninfo_unexecuted_blocks=1 00:05:58.466 00:05:58.466 ' 00:05:58.466 22:50:37 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:58.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.466 --rc genhtml_branch_coverage=1 00:05:58.466 --rc 
genhtml_function_coverage=1 00:05:58.466 --rc genhtml_legend=1 00:05:58.466 --rc geninfo_all_blocks=1 00:05:58.466 --rc geninfo_unexecuted_blocks=1 00:05:58.466 00:05:58.466 ' 00:05:58.466 22:50:37 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:58.466 22:50:37 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:58.466 22:50:37 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:58.466 22:50:37 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:58.466 22:50:37 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:58.466 22:50:37 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:58.466 22:50:37 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:58.466 22:50:37 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:58.466 22:50:37 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:58.466 22:50:37 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:58.466 22:50:37 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:58.466 22:50:37 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:58.466 22:50:37 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e86f0635-77ac-4fdf-8e71-de7b7fded113 00:05:58.466 22:50:37 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=e86f0635-77ac-4fdf-8e71-de7b7fded113 00:05:58.466 22:50:37 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:58.466 22:50:37 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:58.466 22:50:37 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:58.466 22:50:37 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:58.466 22:50:37 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:58.466 22:50:37 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:58.466 22:50:37 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:58.466 22:50:37 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:58.466 22:50:37 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.466 22:50:37 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.466 22:50:37 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.466 22:50:37 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.466 22:50:37 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:58.466 22:50:37 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.466 22:50:37 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:58.466 22:50:37 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:58.466 22:50:37 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:58.466 22:50:37 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:58.466 22:50:37 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:58.466 22:50:37 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:58.466 22:50:37 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:58.466 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:58.466 22:50:37 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:58.466 22:50:37 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:58.466 22:50:37 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:58.466 22:50:37 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:58.466 22:50:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:58.466 22:50:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:58.466 22:50:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:58.466 22:50:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:58.466 22:50:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:58.466 22:50:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:58.466 22:50:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:58.466 22:50:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:58.466 22:50:37 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:58.466 INFO: launching applications... 00:05:58.466 22:50:37 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
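The declare -A lines above build the per-app bookkeeping that json_config/common.sh keeps for each target it launches: pid, private RPC socket, core/memory parameters, and the JSON config to load. A condensed sketch of that pattern with a simplified launch helper (array keys and values are taken from the trace; the helper body is an approximation, not the verbatim common.sh source):

rootdir=/home/vagrant/spdk_repo/spdk

declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')
declare -A configs_path=([target]="$rootdir/test/json_config/extra_key.json")

json_config_test_start_app() {
    local app=$1; shift
    # ${app_params[$app]} is deliberately left unquoted so '-m 0x1 -s 1024'
    # splits back into separate spdk_tgt arguments.
    $rootdir/build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" "$@" &
    app_pid[$app]=$!
}

# Invocation matching the trace:
# json_config_test_start_app target --json "${configs_path[target]}"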
00:05:58.466 22:50:37 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:58.466 22:50:37 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:58.466 22:50:37 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:58.466 22:50:37 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:58.466 22:50:37 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:58.466 22:50:37 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:58.466 22:50:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:58.466 22:50:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:58.466 22:50:37 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=71568 00:05:58.466 22:50:37 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:58.466 Waiting for target to run... 00:05:58.466 22:50:37 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 71568 /var/tmp/spdk_tgt.sock 00:05:58.466 22:50:37 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 71568 ']' 00:05:58.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:58.466 22:50:37 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:58.466 22:50:37 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.466 22:50:37 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:58.466 22:50:37 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.466 22:50:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:58.466 22:50:37 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:58.466 [2024-11-26 22:50:37.576386] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:05:58.466 [2024-11-26 22:50:37.576512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71568 ] 00:05:59.038 [2024-11-26 22:50:37.858105] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:59.038 [2024-11-26 22:50:37.887429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.038 [2024-11-26 22:50:37.902463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.299 22:50:38 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.299 00:05:59.299 22:50:38 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:59.299 22:50:38 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:59.299 INFO: shutting down applications... 00:05:59.299 22:50:38 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
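The shutdown sequence that follows sends SIGINT to the tracked pid and then polls until the process disappears. A sketch of that loop, reusing the bookkeeping array above (the attempt count, poll interval, and status messages all come from the trace):

json_config_test_shutdown_app() {
    local app=$1
    local pid=${app_pid[$app]}
    kill -SIGINT "$pid"
    # kill -0 probes for existence without delivering a signal; poll for
    # up to 15 seconds (30 attempts x 0.5 s) while the target drains.
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$pid" 2> /dev/null; then
            app_pid[$app]=''
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    return 1
}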
00:05:59.299 22:50:38 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:59.299 22:50:38 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:59.299 22:50:38 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:59.299 22:50:38 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 71568 ]] 00:05:59.299 22:50:38 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 71568 00:05:59.299 22:50:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:59.299 22:50:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:59.299 22:50:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71568 00:05:59.299 22:50:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:59.871 22:50:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:59.871 22:50:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:59.871 22:50:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71568 00:05:59.871 22:50:38 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:59.871 22:50:38 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:59.871 22:50:38 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:59.871 SPDK target shutdown done 00:05:59.871 22:50:38 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:59.871 Success 00:05:59.871 22:50:38 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:59.871 00:05:59.871 real 0m1.554s 00:05:59.871 user 0m1.325s 00:05:59.871 sys 0m0.354s 00:05:59.871 22:50:38 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.871 22:50:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:59.871 ************************************ 00:05:59.871 END TEST json_config_extra_key 00:05:59.871 ************************************ 00:05:59.871 22:50:38 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:59.871 22:50:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.871 22:50:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.871 22:50:38 -- common/autotest_common.sh@10 -- # set +x 00:05:59.871 ************************************ 00:05:59.871 START TEST alias_rpc 00:05:59.871 ************************************ 00:05:59.871 22:50:38 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:00.132 * Looking for test storage... 
00:06:00.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:00.132 22:50:39 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:00.132 22:50:39 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:00.132 22:50:39 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:00.132 22:50:39 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:00.132 22:50:39 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.132 22:50:39 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.132 22:50:39 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.132 22:50:39 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.132 22:50:39 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.132 22:50:39 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.132 22:50:39 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.132 22:50:39 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.132 22:50:39 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.132 22:50:39 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.132 22:50:39 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.132 22:50:39 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:00.132 22:50:39 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:00.132 22:50:39 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.132 22:50:39 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:00.132 22:50:39 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:00.132 22:50:39 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:00.132 22:50:39 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.132 22:50:39 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:00.132 22:50:39 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.132 22:50:39 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:00.132 22:50:39 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:00.132 22:50:39 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.132 22:50:39 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:00.132 22:50:39 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.132 22:50:39 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.132 22:50:39 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.132 22:50:39 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:00.132 22:50:39 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.132 22:50:39 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:00.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.132 --rc genhtml_branch_coverage=1 00:06:00.132 --rc genhtml_function_coverage=1 00:06:00.132 --rc genhtml_legend=1 00:06:00.132 --rc geninfo_all_blocks=1 00:06:00.132 --rc geninfo_unexecuted_blocks=1 00:06:00.132 00:06:00.132 ' 00:06:00.132 22:50:39 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:00.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.132 --rc genhtml_branch_coverage=1 00:06:00.132 --rc genhtml_function_coverage=1 00:06:00.132 --rc genhtml_legend=1 00:06:00.132 --rc geninfo_all_blocks=1 00:06:00.132 --rc geninfo_unexecuted_blocks=1 00:06:00.132 00:06:00.132 ' 00:06:00.132 22:50:39 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:00.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.132 --rc genhtml_branch_coverage=1 00:06:00.132 --rc genhtml_function_coverage=1 00:06:00.132 --rc genhtml_legend=1 00:06:00.132 --rc geninfo_all_blocks=1 00:06:00.132 --rc geninfo_unexecuted_blocks=1 00:06:00.132 00:06:00.132 ' 00:06:00.132 22:50:39 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:00.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.132 --rc genhtml_branch_coverage=1 00:06:00.132 --rc genhtml_function_coverage=1 00:06:00.132 --rc genhtml_legend=1 00:06:00.132 --rc geninfo_all_blocks=1 00:06:00.132 --rc geninfo_unexecuted_blocks=1 00:06:00.132 00:06:00.132 ' 00:06:00.132 22:50:39 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:00.132 22:50:39 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=71641 00:06:00.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.132 22:50:39 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 71641 00:06:00.132 22:50:39 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 71641 ']' 00:06:00.132 22:50:39 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.132 22:50:39 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.132 22:50:39 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.132 22:50:39 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.132 22:50:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.132 22:50:39 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.132 [2024-11-26 22:50:39.159680] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:00.132 [2024-11-26 22:50:39.159787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71641 ] 00:06:00.392 [2024-11-26 22:50:39.288066] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:00.392 [2024-11-26 22:50:39.319383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.392 [2024-11-26 22:50:39.344681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.963 22:50:39 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.963 22:50:39 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:00.963 22:50:39 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:01.224 22:50:40 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 71641 00:06:01.224 22:50:40 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 71641 ']' 00:06:01.224 22:50:40 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 71641 00:06:01.224 22:50:40 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:01.224 22:50:40 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.224 22:50:40 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71641 00:06:01.224 killing process with pid 71641 00:06:01.224 22:50:40 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.224 22:50:40 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.224 22:50:40 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71641' 00:06:01.224 22:50:40 alias_rpc -- common/autotest_common.sh@973 -- # kill 71641 00:06:01.224 22:50:40 alias_rpc -- common/autotest_common.sh@978 -- # wait 71641 00:06:01.484 00:06:01.484 real 0m1.615s 00:06:01.484 user 0m1.709s 00:06:01.484 sys 0m0.411s 00:06:01.484 ************************************ 00:06:01.484 END TEST alias_rpc 00:06:01.484 ************************************ 00:06:01.484 22:50:40 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.484 22:50:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.744 22:50:40 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:01.744 22:50:40 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:01.744 22:50:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.744 22:50:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.744 22:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:01.744 ************************************ 00:06:01.744 START TEST spdkcli_tcp 00:06:01.744 ************************************ 00:06:01.744 22:50:40 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:01.744 * Looking for test storage... 
00:06:01.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:01.744 22:50:40 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:01.744 22:50:40 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:01.744 22:50:40 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:01.744 22:50:40 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:01.744 22:50:40 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.744 22:50:40 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.744 22:50:40 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.744 22:50:40 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.744 22:50:40 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.744 22:50:40 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.744 22:50:40 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.744 22:50:40 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.744 22:50:40 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.744 22:50:40 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.745 22:50:40 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.745 22:50:40 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:01.745 22:50:40 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:01.745 22:50:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.745 22:50:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:01.745 22:50:40 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:01.745 22:50:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:01.745 22:50:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.745 22:50:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:01.745 22:50:40 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.745 22:50:40 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:01.745 22:50:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:01.745 22:50:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.745 22:50:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:01.745 22:50:40 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.745 22:50:40 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.745 22:50:40 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.745 22:50:40 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:01.745 22:50:40 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.745 22:50:40 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:01.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.745 --rc genhtml_branch_coverage=1 00:06:01.745 --rc genhtml_function_coverage=1 00:06:01.745 --rc genhtml_legend=1 00:06:01.745 --rc geninfo_all_blocks=1 00:06:01.745 --rc geninfo_unexecuted_blocks=1 00:06:01.745 00:06:01.745 ' 00:06:01.745 22:50:40 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:01.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.745 --rc genhtml_branch_coverage=1 00:06:01.745 --rc genhtml_function_coverage=1 00:06:01.745 --rc genhtml_legend=1 00:06:01.745 --rc geninfo_all_blocks=1 00:06:01.745 --rc geninfo_unexecuted_blocks=1 00:06:01.745 
00:06:01.745 ' 00:06:01.745 22:50:40 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:01.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.745 --rc genhtml_branch_coverage=1 00:06:01.745 --rc genhtml_function_coverage=1 00:06:01.745 --rc genhtml_legend=1 00:06:01.745 --rc geninfo_all_blocks=1 00:06:01.745 --rc geninfo_unexecuted_blocks=1 00:06:01.745 00:06:01.745 ' 00:06:01.745 22:50:40 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:01.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.745 --rc genhtml_branch_coverage=1 00:06:01.745 --rc genhtml_function_coverage=1 00:06:01.745 --rc genhtml_legend=1 00:06:01.745 --rc geninfo_all_blocks=1 00:06:01.745 --rc geninfo_unexecuted_blocks=1 00:06:01.745 00:06:01.745 ' 00:06:01.745 22:50:40 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:01.745 22:50:40 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:01.745 22:50:40 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:01.745 22:50:40 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:01.745 22:50:40 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:01.745 22:50:40 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:01.745 22:50:40 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:01.745 22:50:40 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:01.745 22:50:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:01.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.745 22:50:40 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71721 00:06:01.745 22:50:40 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 71721 00:06:01.745 22:50:40 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 71721 ']' 00:06:01.745 22:50:40 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.745 22:50:40 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.745 22:50:40 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.745 22:50:40 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:01.745 22:50:40 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.745 22:50:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:01.745 [2024-11-26 22:50:40.848490] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:01.745 [2024-11-26 22:50:40.848598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71721 ] 00:06:02.006 [2024-11-26 22:50:40.981164] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
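The spdkcli_tcp run below talks JSON-RPC to the freshly started spdk_tgt over TCP rather than over the usual Unix socket. The trace shows the two moving parts: a socat process bridging TCP port 9998 to /var/tmp/spdk.sock, and rpc.py pointed at 127.0.0.1:9998. A minimal sketch of that pattern, reusing the exact addresses and flags that appear in the trace; the fixed sleep is an assumed stand-in for the script's waitforlisten polling:

    # Bridge TCP 9998 to the target's Unix-domain RPC socket (as in the trace).
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    sleep 1   # assumption: crude settle time in place of the real waitforlisten
    # -r 100: connection retries, -t 2: timeout; -s/-p select the TCP bridge.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"

The rpc_get_methods reply that follows is the full list of RPC names the target registered, which is what the test asserts against.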
00:06:02.006 [2024-11-26 22:50:41.006933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:02.006 [2024-11-26 22:50:41.032511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.006 [2024-11-26 22:50:41.032579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.576 22:50:41 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.576 22:50:41 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:02.576 22:50:41 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=71738 00:06:02.576 22:50:41 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:02.576 22:50:41 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:02.837 [ 00:06:02.837 "bdev_malloc_delete", 00:06:02.837 "bdev_malloc_create", 00:06:02.837 "bdev_null_resize", 00:06:02.837 "bdev_null_delete", 00:06:02.837 "bdev_null_create", 00:06:02.837 "bdev_nvme_cuse_unregister", 00:06:02.837 "bdev_nvme_cuse_register", 00:06:02.837 "bdev_opal_new_user", 00:06:02.837 "bdev_opal_set_lock_state", 00:06:02.837 "bdev_opal_delete", 00:06:02.837 "bdev_opal_get_info", 00:06:02.837 "bdev_opal_create", 00:06:02.837 "bdev_nvme_opal_revert", 00:06:02.837 "bdev_nvme_opal_init", 00:06:02.837 "bdev_nvme_send_cmd", 00:06:02.837 "bdev_nvme_set_keys", 00:06:02.837 "bdev_nvme_get_path_iostat", 00:06:02.837 "bdev_nvme_get_mdns_discovery_info", 00:06:02.837 "bdev_nvme_stop_mdns_discovery", 00:06:02.837 "bdev_nvme_start_mdns_discovery", 00:06:02.837 "bdev_nvme_set_multipath_policy", 00:06:02.837 "bdev_nvme_set_preferred_path", 00:06:02.837 "bdev_nvme_get_io_paths", 00:06:02.837 "bdev_nvme_remove_error_injection", 00:06:02.837 "bdev_nvme_add_error_injection", 00:06:02.837 "bdev_nvme_get_discovery_info", 00:06:02.837 "bdev_nvme_stop_discovery", 00:06:02.837 "bdev_nvme_start_discovery", 00:06:02.837 "bdev_nvme_get_controller_health_info", 00:06:02.837 "bdev_nvme_disable_controller", 00:06:02.837 "bdev_nvme_enable_controller", 00:06:02.837 "bdev_nvme_reset_controller", 00:06:02.837 "bdev_nvme_get_transport_statistics", 00:06:02.837 "bdev_nvme_apply_firmware", 00:06:02.837 "bdev_nvme_detach_controller", 00:06:02.837 "bdev_nvme_get_controllers", 00:06:02.837 "bdev_nvme_attach_controller", 00:06:02.837 "bdev_nvme_set_hotplug", 00:06:02.837 "bdev_nvme_set_options", 00:06:02.837 "bdev_passthru_delete", 00:06:02.837 "bdev_passthru_create", 00:06:02.837 "bdev_lvol_set_parent_bdev", 00:06:02.837 "bdev_lvol_set_parent", 00:06:02.837 "bdev_lvol_check_shallow_copy", 00:06:02.837 "bdev_lvol_start_shallow_copy", 00:06:02.837 "bdev_lvol_grow_lvstore", 00:06:02.837 "bdev_lvol_get_lvols", 00:06:02.837 "bdev_lvol_get_lvstores", 00:06:02.837 "bdev_lvol_delete", 00:06:02.837 "bdev_lvol_set_read_only", 00:06:02.837 "bdev_lvol_resize", 00:06:02.837 "bdev_lvol_decouple_parent", 00:06:02.837 "bdev_lvol_inflate", 00:06:02.837 "bdev_lvol_rename", 00:06:02.837 "bdev_lvol_clone_bdev", 00:06:02.837 "bdev_lvol_clone", 00:06:02.837 "bdev_lvol_snapshot", 00:06:02.837 "bdev_lvol_create", 00:06:02.837 "bdev_lvol_delete_lvstore", 00:06:02.837 "bdev_lvol_rename_lvstore", 00:06:02.837 "bdev_lvol_create_lvstore", 00:06:02.837 "bdev_raid_set_options", 00:06:02.837 "bdev_raid_remove_base_bdev", 00:06:02.837 "bdev_raid_add_base_bdev", 00:06:02.837 "bdev_raid_delete", 00:06:02.837 "bdev_raid_create", 00:06:02.837 "bdev_raid_get_bdevs", 00:06:02.837 "bdev_error_inject_error", 00:06:02.837 
"bdev_error_delete", 00:06:02.837 "bdev_error_create", 00:06:02.837 "bdev_split_delete", 00:06:02.837 "bdev_split_create", 00:06:02.837 "bdev_delay_delete", 00:06:02.837 "bdev_delay_create", 00:06:02.837 "bdev_delay_update_latency", 00:06:02.837 "bdev_zone_block_delete", 00:06:02.837 "bdev_zone_block_create", 00:06:02.837 "blobfs_create", 00:06:02.837 "blobfs_detect", 00:06:02.837 "blobfs_set_cache_size", 00:06:02.837 "bdev_xnvme_delete", 00:06:02.837 "bdev_xnvme_create", 00:06:02.837 "bdev_aio_delete", 00:06:02.837 "bdev_aio_rescan", 00:06:02.837 "bdev_aio_create", 00:06:02.837 "bdev_ftl_set_property", 00:06:02.837 "bdev_ftl_get_properties", 00:06:02.837 "bdev_ftl_get_stats", 00:06:02.837 "bdev_ftl_unmap", 00:06:02.837 "bdev_ftl_unload", 00:06:02.837 "bdev_ftl_delete", 00:06:02.837 "bdev_ftl_load", 00:06:02.837 "bdev_ftl_create", 00:06:02.837 "bdev_virtio_attach_controller", 00:06:02.837 "bdev_virtio_scsi_get_devices", 00:06:02.837 "bdev_virtio_detach_controller", 00:06:02.837 "bdev_virtio_blk_set_hotplug", 00:06:02.837 "bdev_iscsi_delete", 00:06:02.837 "bdev_iscsi_create", 00:06:02.837 "bdev_iscsi_set_options", 00:06:02.837 "accel_error_inject_error", 00:06:02.837 "ioat_scan_accel_module", 00:06:02.837 "dsa_scan_accel_module", 00:06:02.837 "iaa_scan_accel_module", 00:06:02.837 "keyring_file_remove_key", 00:06:02.837 "keyring_file_add_key", 00:06:02.837 "keyring_linux_set_options", 00:06:02.837 "fsdev_aio_delete", 00:06:02.837 "fsdev_aio_create", 00:06:02.837 "iscsi_get_histogram", 00:06:02.837 "iscsi_enable_histogram", 00:06:02.837 "iscsi_set_options", 00:06:02.837 "iscsi_get_auth_groups", 00:06:02.837 "iscsi_auth_group_remove_secret", 00:06:02.837 "iscsi_auth_group_add_secret", 00:06:02.837 "iscsi_delete_auth_group", 00:06:02.837 "iscsi_create_auth_group", 00:06:02.837 "iscsi_set_discovery_auth", 00:06:02.837 "iscsi_get_options", 00:06:02.837 "iscsi_target_node_request_logout", 00:06:02.837 "iscsi_target_node_set_redirect", 00:06:02.837 "iscsi_target_node_set_auth", 00:06:02.837 "iscsi_target_node_add_lun", 00:06:02.837 "iscsi_get_stats", 00:06:02.837 "iscsi_get_connections", 00:06:02.837 "iscsi_portal_group_set_auth", 00:06:02.837 "iscsi_start_portal_group", 00:06:02.837 "iscsi_delete_portal_group", 00:06:02.837 "iscsi_create_portal_group", 00:06:02.837 "iscsi_get_portal_groups", 00:06:02.837 "iscsi_delete_target_node", 00:06:02.837 "iscsi_target_node_remove_pg_ig_maps", 00:06:02.837 "iscsi_target_node_add_pg_ig_maps", 00:06:02.837 "iscsi_create_target_node", 00:06:02.837 "iscsi_get_target_nodes", 00:06:02.837 "iscsi_delete_initiator_group", 00:06:02.837 "iscsi_initiator_group_remove_initiators", 00:06:02.837 "iscsi_initiator_group_add_initiators", 00:06:02.837 "iscsi_create_initiator_group", 00:06:02.837 "iscsi_get_initiator_groups", 00:06:02.837 "nvmf_set_crdt", 00:06:02.837 "nvmf_set_config", 00:06:02.837 "nvmf_set_max_subsystems", 00:06:02.837 "nvmf_stop_mdns_prr", 00:06:02.837 "nvmf_publish_mdns_prr", 00:06:02.837 "nvmf_subsystem_get_listeners", 00:06:02.837 "nvmf_subsystem_get_qpairs", 00:06:02.837 "nvmf_subsystem_get_controllers", 00:06:02.837 "nvmf_get_stats", 00:06:02.837 "nvmf_get_transports", 00:06:02.837 "nvmf_create_transport", 00:06:02.837 "nvmf_get_targets", 00:06:02.837 "nvmf_delete_target", 00:06:02.837 "nvmf_create_target", 00:06:02.837 "nvmf_subsystem_allow_any_host", 00:06:02.837 "nvmf_subsystem_set_keys", 00:06:02.837 "nvmf_subsystem_remove_host", 00:06:02.837 "nvmf_subsystem_add_host", 00:06:02.838 "nvmf_ns_remove_host", 00:06:02.838 "nvmf_ns_add_host", 
00:06:02.838 "nvmf_subsystem_remove_ns", 00:06:02.838 "nvmf_subsystem_set_ns_ana_group", 00:06:02.838 "nvmf_subsystem_add_ns", 00:06:02.838 "nvmf_subsystem_listener_set_ana_state", 00:06:02.838 "nvmf_discovery_get_referrals", 00:06:02.838 "nvmf_discovery_remove_referral", 00:06:02.838 "nvmf_discovery_add_referral", 00:06:02.838 "nvmf_subsystem_remove_listener", 00:06:02.838 "nvmf_subsystem_add_listener", 00:06:02.838 "nvmf_delete_subsystem", 00:06:02.838 "nvmf_create_subsystem", 00:06:02.838 "nvmf_get_subsystems", 00:06:02.838 "env_dpdk_get_mem_stats", 00:06:02.838 "nbd_get_disks", 00:06:02.838 "nbd_stop_disk", 00:06:02.838 "nbd_start_disk", 00:06:02.838 "ublk_recover_disk", 00:06:02.838 "ublk_get_disks", 00:06:02.838 "ublk_stop_disk", 00:06:02.838 "ublk_start_disk", 00:06:02.838 "ublk_destroy_target", 00:06:02.838 "ublk_create_target", 00:06:02.838 "virtio_blk_create_transport", 00:06:02.838 "virtio_blk_get_transports", 00:06:02.838 "vhost_controller_set_coalescing", 00:06:02.838 "vhost_get_controllers", 00:06:02.838 "vhost_delete_controller", 00:06:02.838 "vhost_create_blk_controller", 00:06:02.838 "vhost_scsi_controller_remove_target", 00:06:02.838 "vhost_scsi_controller_add_target", 00:06:02.838 "vhost_start_scsi_controller", 00:06:02.838 "vhost_create_scsi_controller", 00:06:02.838 "thread_set_cpumask", 00:06:02.838 "scheduler_set_options", 00:06:02.838 "framework_get_governor", 00:06:02.838 "framework_get_scheduler", 00:06:02.838 "framework_set_scheduler", 00:06:02.838 "framework_get_reactors", 00:06:02.838 "thread_get_io_channels", 00:06:02.838 "thread_get_pollers", 00:06:02.838 "thread_get_stats", 00:06:02.838 "framework_monitor_context_switch", 00:06:02.838 "spdk_kill_instance", 00:06:02.838 "log_enable_timestamps", 00:06:02.838 "log_get_flags", 00:06:02.838 "log_clear_flag", 00:06:02.838 "log_set_flag", 00:06:02.838 "log_get_level", 00:06:02.838 "log_set_level", 00:06:02.838 "log_get_print_level", 00:06:02.838 "log_set_print_level", 00:06:02.838 "framework_enable_cpumask_locks", 00:06:02.838 "framework_disable_cpumask_locks", 00:06:02.838 "framework_wait_init", 00:06:02.838 "framework_start_init", 00:06:02.838 "scsi_get_devices", 00:06:02.838 "bdev_get_histogram", 00:06:02.838 "bdev_enable_histogram", 00:06:02.838 "bdev_set_qos_limit", 00:06:02.838 "bdev_set_qd_sampling_period", 00:06:02.838 "bdev_get_bdevs", 00:06:02.838 "bdev_reset_iostat", 00:06:02.838 "bdev_get_iostat", 00:06:02.838 "bdev_examine", 00:06:02.838 "bdev_wait_for_examine", 00:06:02.838 "bdev_set_options", 00:06:02.838 "accel_get_stats", 00:06:02.838 "accel_set_options", 00:06:02.838 "accel_set_driver", 00:06:02.838 "accel_crypto_key_destroy", 00:06:02.838 "accel_crypto_keys_get", 00:06:02.838 "accel_crypto_key_create", 00:06:02.838 "accel_assign_opc", 00:06:02.838 "accel_get_module_info", 00:06:02.838 "accel_get_opc_assignments", 00:06:02.838 "vmd_rescan", 00:06:02.838 "vmd_remove_device", 00:06:02.838 "vmd_enable", 00:06:02.838 "sock_get_default_impl", 00:06:02.838 "sock_set_default_impl", 00:06:02.838 "sock_impl_set_options", 00:06:02.838 "sock_impl_get_options", 00:06:02.838 "iobuf_get_stats", 00:06:02.838 "iobuf_set_options", 00:06:02.838 "keyring_get_keys", 00:06:02.838 "framework_get_pci_devices", 00:06:02.838 "framework_get_config", 00:06:02.838 "framework_get_subsystems", 00:06:02.838 "fsdev_set_opts", 00:06:02.838 "fsdev_get_opts", 00:06:02.838 "trace_get_info", 00:06:02.838 "trace_get_tpoint_group_mask", 00:06:02.838 "trace_disable_tpoint_group", 00:06:02.838 "trace_enable_tpoint_group", 00:06:02.838 
"trace_clear_tpoint_mask", 00:06:02.838 "trace_set_tpoint_mask", 00:06:02.838 "notify_get_notifications", 00:06:02.838 "notify_get_types", 00:06:02.838 "spdk_get_version", 00:06:02.838 "rpc_get_methods" 00:06:02.838 ] 00:06:02.838 22:50:41 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:02.838 22:50:41 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:02.838 22:50:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:02.838 22:50:41 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:02.838 22:50:41 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 71721 00:06:02.838 22:50:41 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 71721 ']' 00:06:02.838 22:50:41 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 71721 00:06:02.838 22:50:41 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:02.838 22:50:41 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.838 22:50:41 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71721 00:06:02.838 killing process with pid 71721 00:06:02.838 22:50:41 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.838 22:50:41 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.838 22:50:41 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71721' 00:06:02.838 22:50:41 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 71721 00:06:02.838 22:50:41 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 71721 00:06:03.099 ************************************ 00:06:03.099 END TEST spdkcli_tcp 00:06:03.099 ************************************ 00:06:03.099 00:06:03.099 real 0m1.602s 00:06:03.099 user 0m2.722s 00:06:03.099 sys 0m0.470s 00:06:03.099 22:50:42 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.099 22:50:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:03.359 22:50:42 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:03.359 22:50:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.359 22:50:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.359 22:50:42 -- common/autotest_common.sh@10 -- # set +x 00:06:03.359 ************************************ 00:06:03.359 START TEST dpdk_mem_utility 00:06:03.359 ************************************ 00:06:03.359 22:50:42 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:03.359 * Looking for test storage... 
00:06:03.359 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:03.359 22:50:42 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:03.359 22:50:42 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:03.359 22:50:42 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:03.359 22:50:42 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:03.359 22:50:42 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.359 22:50:42 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.359 22:50:42 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.359 22:50:42 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.359 22:50:42 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.359 22:50:42 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.359 22:50:42 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.359 22:50:42 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.359 22:50:42 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.359 22:50:42 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.359 22:50:42 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.359 22:50:42 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:03.359 22:50:42 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:03.359 22:50:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.359 22:50:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:03.359 22:50:42 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:03.359 22:50:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:03.359 22:50:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.359 22:50:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:03.359 22:50:42 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.359 22:50:42 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:03.359 22:50:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:03.359 22:50:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.359 22:50:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:03.359 22:50:42 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.359 22:50:42 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.359 22:50:42 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.359 22:50:42 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:03.359 22:50:42 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
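The cmp_versions trace above is how these tests decide whether the installed lcov predates 2.x: split each version string on its separators, walk the fields, and let the first numeric difference decide. A simplified standalone sketch of the idiom (the traced helper in scripts/common.sh also splits on '-' and ':' and digit-checks each field via its decimal() guard; the name version_lt here is ours, chosen to avoid colliding with the real lt/cmp_versions):

    # First differing numeric field decides; missing fields count as 0.
    version_lt() {
        local -a a b
        IFS=. read -ra a <<< "$1"
        IFS=. read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "old lcov"   # matches the traced 'lt 1.15 2' -> true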
00:06:03.359 22:50:42 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:03.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.359 --rc genhtml_branch_coverage=1 00:06:03.359 --rc genhtml_function_coverage=1 00:06:03.359 --rc genhtml_legend=1 00:06:03.359 --rc geninfo_all_blocks=1 00:06:03.359 --rc geninfo_unexecuted_blocks=1 00:06:03.359 00:06:03.359 ' 00:06:03.359 22:50:42 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:03.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.359 --rc genhtml_branch_coverage=1 00:06:03.359 --rc genhtml_function_coverage=1 00:06:03.359 --rc genhtml_legend=1 00:06:03.359 --rc geninfo_all_blocks=1 00:06:03.359 --rc geninfo_unexecuted_blocks=1 00:06:03.359 00:06:03.359 ' 00:06:03.359 22:50:42 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:03.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.359 --rc genhtml_branch_coverage=1 00:06:03.359 --rc genhtml_function_coverage=1 00:06:03.359 --rc genhtml_legend=1 00:06:03.359 --rc geninfo_all_blocks=1 00:06:03.359 --rc geninfo_unexecuted_blocks=1 00:06:03.359 00:06:03.359 ' 00:06:03.359 22:50:42 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:03.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.359 --rc genhtml_branch_coverage=1 00:06:03.359 --rc genhtml_function_coverage=1 00:06:03.359 --rc genhtml_legend=1 00:06:03.359 --rc geninfo_all_blocks=1 00:06:03.359 --rc geninfo_unexecuted_blocks=1 00:06:03.359 00:06:03.359 ' 00:06:03.359 22:50:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:03.359 22:50:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71815 00:06:03.359 22:50:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71815 00:06:03.359 22:50:42 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 71815 ']' 00:06:03.359 22:50:42 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.359 22:50:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:03.359 22:50:42 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.359 22:50:42 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.359 22:50:42 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.359 22:50:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:03.619 [2024-11-26 22:50:42.497419] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:03.619 [2024-11-26 22:50:42.497679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71815 ] 00:06:03.619 [2024-11-26 22:50:42.629748] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
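Once the EAL notices below finish, test_dpdk_mem_info.sh exercises the two tools shown in this trace: the env_dpdk_get_mem_stats RPC, which has the target write its DPDK allocator state to a dump file, and scripts/dpdk_mem_info.py, which digests that dump (plain for heap/mempool/memzone totals, -m 0 for the element-by-element map of heap 0 that dominates the next screens). The sequence, as run here:

    # Ask the live target to dump its memory state; the reply names the dump file.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    #   -> { "filename": "/tmp/spdk_mem_dump.txt" }
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py        # summary: heaps, mempools, memzones
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0   # free/busy element map of heap 0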
00:06:03.619 [2024-11-26 22:50:42.658583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.619 [2024-11-26 22:50:42.683092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.560 22:50:43 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.560 22:50:43 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:04.560 22:50:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:04.560 22:50:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:04.560 22:50:43 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.560 22:50:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:04.560 { 00:06:04.560 "filename": "/tmp/spdk_mem_dump.txt" 00:06:04.560 } 00:06:04.560 22:50:43 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.560 22:50:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:04.560 DPDK memory size 818.000000 MiB in 1 heap(s) 00:06:04.560 1 heaps totaling size 818.000000 MiB 00:06:04.560 size: 818.000000 MiB heap id: 0 00:06:04.560 end heaps---------- 00:06:04.560 9 mempools totaling size 603.782043 MiB 00:06:04.560 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:04.560 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:04.560 size: 100.555481 MiB name: bdev_io_71815 00:06:04.560 size: 50.003479 MiB name: msgpool_71815 00:06:04.560 size: 36.509338 MiB name: fsdev_io_71815 00:06:04.560 size: 21.763794 MiB name: PDU_Pool 00:06:04.560 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:04.560 size: 4.133484 MiB name: evtpool_71815 00:06:04.560 size: 0.026123 MiB name: Session_Pool 00:06:04.560 end mempools------- 00:06:04.560 6 memzones totaling size 4.142822 MiB 00:06:04.560 size: 1.000366 MiB name: RG_ring_0_71815 00:06:04.560 size: 1.000366 MiB name: RG_ring_1_71815 00:06:04.560 size: 1.000366 MiB name: RG_ring_4_71815 00:06:04.560 size: 1.000366 MiB name: RG_ring_5_71815 00:06:04.560 size: 0.125366 MiB name: RG_ring_2_71815 00:06:04.560 size: 0.015991 MiB name: RG_ring_3_71815 00:06:04.560 end memzones------- 00:06:04.560 22:50:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:04.560 heap id: 0 total size: 818.000000 MiB number of busy elements: 316 number of free elements: 15 00:06:04.560 list of free elements. 
size: 10.943237 MiB 00:06:04.560 element at address: 0x200019200000 with size: 0.999878 MiB 00:06:04.560 element at address: 0x200019400000 with size: 0.999878 MiB 00:06:04.560 element at address: 0x200032000000 with size: 0.994446 MiB 00:06:04.560 element at address: 0x200000400000 with size: 0.993958 MiB 00:06:04.560 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:04.560 element at address: 0x200012c00000 with size: 0.944275 MiB 00:06:04.560 element at address: 0x200019600000 with size: 0.936584 MiB 00:06:04.560 element at address: 0x200000200000 with size: 0.858093 MiB 00:06:04.560 element at address: 0x20001ae00000 with size: 0.567688 MiB 00:06:04.560 element at address: 0x20000a600000 with size: 0.488892 MiB 00:06:04.560 element at address: 0x200000c00000 with size: 0.486267 MiB 00:06:04.560 element at address: 0x200019800000 with size: 0.485657 MiB 00:06:04.560 element at address: 0x200003e00000 with size: 0.480286 MiB 00:06:04.560 element at address: 0x200028200000 with size: 0.395752 MiB 00:06:04.560 element at address: 0x200000800000 with size: 0.351746 MiB 00:06:04.560 list of standard malloc elements. size: 199.127869 MiB 00:06:04.560 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:04.560 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:04.560 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:04.560 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:06:04.560 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:06:04.560 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:06:04.560 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:04.560 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:06:04.560 element at address: 0x2000002fbcc0 with size: 0.000183 MiB 00:06:04.560 element at address: 0x2000003fdec0 with size: 0.000183 MiB 00:06:04.560 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:06:04.560 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:06:04.560 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:06:04.560 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:06:04.560 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:06:04.560 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:06:04.560 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:06:04.560 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:06:04.560 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:06:04.560 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:06:04.560 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:06:04.560 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:06:04.560 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:06:04.560 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:06:04.560 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:06:04.560 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:06:04.560 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:06:04.560 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:06:04.560 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:06:04.560 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:06:04.560 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:06:04.560 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:06:04.560 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 
00:06:04.560 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:06:04.560 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:06:04.560 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:04.560 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:04.560 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:06:04.560 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:04.560 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:04.560 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:06:04.560 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:06:04.560 element at address: 0x20000085e580 with size: 0.000183 MiB 00:06:04.560 element at address: 0x20000087e840 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20000087e900 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20000087f080 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20000087f140 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20000087f200 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20000087f380 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20000087f440 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20000087f500 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:04.561 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:04.561 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:06:04.561 element at 
address: 0x200000c7d480 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:04.561 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20000a67d280 
with size: 0.000183 MiB 00:06:04.561 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:04.561 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:06:04.561 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:06:04.561 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae91540 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae92d40 with size: 0.000183 MiB 
00:06:04.561 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:06:04.561 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:06:04.562 element at 
address: 0x20001ae952c0 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:06:04.562 element at address: 0x200028265500 with size: 0.000183 MiB 00:06:04.562 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826c480 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826c540 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826c600 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826c780 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826c840 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826c900 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826d080 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826d140 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826d200 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826d380 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826d440 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826d500 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826d680 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826d740 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826d800 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826d980 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826da40 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826db00 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826de00 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826df80 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826e040 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826e100 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826e280 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826e340 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826e400 
with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826e580 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826e640 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826e700 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826e880 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826e940 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826f000 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826f180 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826f240 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826f300 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826f480 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826f540 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826f600 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826f780 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826f840 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826f900 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:06:04.562 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:06:04.562 list of memzone associated elements. 
size: 607.928894 MiB 00:06:04.562 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:06:04.562 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:04.562 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:06:04.562 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:04.562 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:06:04.562 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_71815_0 00:06:04.562 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:04.562 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71815_0 00:06:04.562 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:04.562 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_71815_0 00:06:04.562 element at address: 0x2000199be940 with size: 20.255554 MiB 00:06:04.562 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:04.562 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:06:04.562 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:04.562 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:04.562 associated memzone info: size: 3.000122 MiB name: MP_evtpool_71815_0 00:06:04.562 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:04.562 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71815 00:06:04.562 element at address: 0x2000002fbd80 with size: 1.008118 MiB 00:06:04.562 associated memzone info: size: 1.007996 MiB name: MP_evtpool_71815 00:06:04.562 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:04.562 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:04.562 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:06:04.562 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:04.562 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:04.562 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:04.562 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:04.562 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:04.562 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:04.562 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71815 00:06:04.562 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:04.562 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71815 00:06:04.562 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:06:04.562 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71815 00:06:04.563 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:06:04.563 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71815 00:06:04.563 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:04.563 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_71815 00:06:04.563 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:04.563 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71815 00:06:04.563 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:04.563 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:04.563 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:04.563 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:04.563 element at address: 0x20001987c540 with size: 0.250488 MiB 00:06:04.563 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:06:04.563 element at address: 0x2000002dbac0 with size: 0.125488 MiB 00:06:04.563 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_71815 00:06:04.563 element at address: 0x20000085e640 with size: 0.125488 MiB 00:06:04.563 associated memzone info: size: 0.125366 MiB name: RG_ring_2_71815 00:06:04.563 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:04.563 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:04.563 element at address: 0x200028265680 with size: 0.023743 MiB 00:06:04.563 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:04.563 element at address: 0x20000085a380 with size: 0.016113 MiB 00:06:04.563 associated memzone info: size: 0.015991 MiB name: RG_ring_3_71815 00:06:04.563 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:06:04.563 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:04.563 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:06:04.563 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71815 00:06:04.563 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:04.563 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_71815 00:06:04.563 element at address: 0x20000085a180 with size: 0.000305 MiB 00:06:04.563 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71815 00:06:04.563 element at address: 0x20002826c280 with size: 0.000305 MiB 00:06:04.563 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:04.563 22:50:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:04.563 22:50:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71815 00:06:04.563 22:50:43 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 71815 ']' 00:06:04.563 22:50:43 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 71815 00:06:04.563 22:50:43 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:04.563 22:50:43 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.563 22:50:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71815 00:06:04.563 22:50:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.563 22:50:43 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.563 22:50:43 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71815' 00:06:04.563 killing process with pid 71815 00:06:04.563 22:50:43 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 71815 00:06:04.563 22:50:43 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 71815 00:06:04.823 00:06:04.823 real 0m1.526s 00:06:04.823 user 0m1.523s 00:06:04.823 sys 0m0.413s 00:06:04.823 22:50:43 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.823 22:50:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:04.823 ************************************ 00:06:04.823 END TEST dpdk_mem_utility 00:06:04.823 ************************************ 00:06:04.823 22:50:43 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:04.823 22:50:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.823 22:50:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.823 22:50:43 -- common/autotest_common.sh@10 -- # set +x 
00:06:04.823 ************************************ 00:06:04.823 START TEST event 00:06:04.823 ************************************ 00:06:04.823 22:50:43 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:04.823 * Looking for test storage... 00:06:04.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:04.823 22:50:43 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:04.823 22:50:43 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:04.823 22:50:43 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:05.083 22:50:43 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:05.083 22:50:43 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.083 22:50:43 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.083 22:50:43 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.083 22:50:43 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.083 22:50:43 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.083 22:50:43 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.083 22:50:43 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.083 22:50:43 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.083 22:50:43 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.083 22:50:43 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.083 22:50:43 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.083 22:50:43 event -- scripts/common.sh@344 -- # case "$op" in 00:06:05.083 22:50:43 event -- scripts/common.sh@345 -- # : 1 00:06:05.083 22:50:43 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.083 22:50:43 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.083 22:50:43 event -- scripts/common.sh@365 -- # decimal 1 00:06:05.083 22:50:43 event -- scripts/common.sh@353 -- # local d=1 00:06:05.083 22:50:43 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.083 22:50:43 event -- scripts/common.sh@355 -- # echo 1 00:06:05.083 22:50:43 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.083 22:50:43 event -- scripts/common.sh@366 -- # decimal 2 00:06:05.083 22:50:43 event -- scripts/common.sh@353 -- # local d=2 00:06:05.083 22:50:43 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.083 22:50:43 event -- scripts/common.sh@355 -- # echo 2 00:06:05.083 22:50:43 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.083 22:50:43 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.083 22:50:43 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.083 22:50:43 event -- scripts/common.sh@368 -- # return 0 00:06:05.083 22:50:43 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.083 22:50:43 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:05.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.083 --rc genhtml_branch_coverage=1 00:06:05.083 --rc genhtml_function_coverage=1 00:06:05.083 --rc genhtml_legend=1 00:06:05.083 --rc geninfo_all_blocks=1 00:06:05.083 --rc geninfo_unexecuted_blocks=1 00:06:05.083 00:06:05.083 ' 00:06:05.083 22:50:43 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:05.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.083 --rc genhtml_branch_coverage=1 00:06:05.083 --rc genhtml_function_coverage=1 00:06:05.083 --rc genhtml_legend=1 00:06:05.083 --rc 
geninfo_all_blocks=1 00:06:05.083 --rc geninfo_unexecuted_blocks=1 00:06:05.083 00:06:05.083 ' 00:06:05.083 22:50:43 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:05.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.083 --rc genhtml_branch_coverage=1 00:06:05.083 --rc genhtml_function_coverage=1 00:06:05.083 --rc genhtml_legend=1 00:06:05.083 --rc geninfo_all_blocks=1 00:06:05.083 --rc geninfo_unexecuted_blocks=1 00:06:05.083 00:06:05.083 ' 00:06:05.083 22:50:43 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:05.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.083 --rc genhtml_branch_coverage=1 00:06:05.083 --rc genhtml_function_coverage=1 00:06:05.083 --rc genhtml_legend=1 00:06:05.083 --rc geninfo_all_blocks=1 00:06:05.083 --rc geninfo_unexecuted_blocks=1 00:06:05.083 00:06:05.083 ' 00:06:05.083 22:50:43 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:05.083 22:50:43 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:05.083 22:50:43 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:05.083 22:50:43 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:05.083 22:50:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.083 22:50:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.083 ************************************ 00:06:05.083 START TEST event_perf 00:06:05.083 ************************************ 00:06:05.083 22:50:43 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:05.084 Running I/O for 1 seconds...[2024-11-26 22:50:44.025724] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:05.084 [2024-11-26 22:50:44.025930] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71896 ] 00:06:05.084 [2024-11-26 22:50:44.157968] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:05.084 [2024-11-26 22:50:44.187093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:05.344 [2024-11-26 22:50:44.214821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.344 [2024-11-26 22:50:44.215559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.344 Running I/O for 1 seconds...[2024-11-26 22:50:44.215992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:05.344 [2024-11-26 22:50:44.216034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.334 00:06:06.334 lcore 0: 149713 00:06:06.334 lcore 1: 149714 00:06:06.334 lcore 2: 149711 00:06:06.334 lcore 3: 149712 00:06:06.334 done. 
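The lcov gate at the top of this suite ('lt 1.15 2') is scripts/common.sh comparing dotted version strings: both sides are split on '.', '-' and ':', padded with zeros to equal length, and compared numerically component by component. A rough re-creation, assuming purely numeric components (the real cmp_versions also accepts '>=' and '<=' and strips non-digits through its decimal helper):

    cmp_versions() {                 # usage: cmp_versions 1.15 '<' 2
        local IFS=.-:                # split points, exactly as in the trace
        local op=$2 v ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}    # missing components count as 0
            if (( a > b )); then [[ $op == '>' ]]; return; fi
            if (( a < b )); then [[ $op == '<' ]]; return; fi
        done
        [[ $op == '=' ]]             # equal all the way through
    }
    lt() { cmp_versions "$1" '<' "$2"; }    # wrapper the trace enters first

Here lt 1.15 2 succeeds on the very first component (1 < 2), which is why the old-style --rc lcov_branch_coverage flags get exported above.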
00:06:06.334 00:06:06.334 real 0m1.279s 00:06:06.334 user 0m4.077s 00:06:06.334 sys 0m0.081s 00:06:06.334 22:50:45 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.334 22:50:45 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:06.334 ************************************ 00:06:06.334 END TEST event_perf 00:06:06.334 ************************************ 00:06:06.334 22:50:45 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:06.334 22:50:45 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:06.334 22:50:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.334 22:50:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.334 ************************************ 00:06:06.334 START TEST event_reactor 00:06:06.334 ************************************ 00:06:06.335 22:50:45 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:06.335 [2024-11-26 22:50:45.348933] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:06.335 [2024-11-26 22:50:45.349194] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71935 ] 00:06:06.613 [2024-11-26 22:50:45.479441] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:06.613 [2024-11-26 22:50:45.509761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.613 [2024-11-26 22:50:45.533599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.556 test_start 00:06:07.556 oneshot 00:06:07.556 tick 100 00:06:07.556 tick 100 00:06:07.556 tick 250 00:06:07.556 tick 100 00:06:07.556 tick 100 00:06:07.556 tick 100 00:06:07.556 tick 250 00:06:07.556 tick 500 00:06:07.556 tick 100 00:06:07.556 tick 100 00:06:07.556 tick 250 00:06:07.556 tick 100 00:06:07.556 tick 100 00:06:07.556 test_end 00:06:07.556 ************************************ 00:06:07.556 END TEST event_reactor 00:06:07.556 ************************************ 00:06:07.556 00:06:07.556 real 0m1.271s 00:06:07.556 user 0m1.094s 00:06:07.556 sys 0m0.069s 00:06:07.556 22:50:46 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.556 22:50:46 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:07.556 22:50:46 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:07.556 22:50:46 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:07.556 22:50:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.556 22:50:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:07.556 ************************************ 00:06:07.556 START TEST event_reactor_perf 00:06:07.556 ************************************ 00:06:07.557 22:50:46 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:07.557 [2024-11-26 22:50:46.674796] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
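Every suite in this log runs under the same run_test wrapper, which is where the START/END banners and the real/user/sys lines come from: print a banner, time the payload command, print the closing banner, propagate the exit code. Reduced to its skeleton (the real wrapper in common/autotest_common.sh also validates its argument count, visible above as the '[' N -le 1 ']' guards, and tags xtrace output with the test name):

    run_test() {
        local name=$1
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                    # emits the real/user/sys block seen after each test
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }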
00:06:07.557 [2024-11-26 22:50:46.674923] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71966 ] 00:06:07.817 [2024-11-26 22:50:46.805348] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:07.817 [2024-11-26 22:50:46.837090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.817 [2024-11-26 22:50:46.860849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.204 test_start 00:06:09.204 test_end 00:06:09.204 Performance: 316406 events per second 00:06:09.204 00:06:09.204 real 0m1.265s 00:06:09.204 user 0m1.088s 00:06:09.204 sys 0m0.069s 00:06:09.204 22:50:47 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.204 22:50:47 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:09.204 ************************************ 00:06:09.204 END TEST event_reactor_perf 00:06:09.204 ************************************ 00:06:09.204 22:50:47 event -- event/event.sh@49 -- # uname -s 00:06:09.204 22:50:47 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:09.204 22:50:47 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:09.204 22:50:47 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.204 22:50:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.204 22:50:47 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.204 ************************************ 00:06:09.204 START TEST event_scheduler 00:06:09.204 ************************************ 00:06:09.204 22:50:47 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:09.204 * Looking for test storage... 
00:06:09.204 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:09.204 22:50:48 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:09.204 22:50:48 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:09.204 22:50:48 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:09.204 22:50:48 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:09.204 22:50:48 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.204 22:50:48 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.204 22:50:48 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.204 22:50:48 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.204 22:50:48 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.204 22:50:48 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.204 22:50:48 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.204 22:50:48 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.204 22:50:48 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.204 22:50:48 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.204 22:50:48 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.204 22:50:48 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:09.204 22:50:48 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:09.204 22:50:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.204 22:50:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.204 22:50:48 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:09.204 22:50:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:09.204 22:50:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.204 22:50:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:09.204 22:50:48 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.204 22:50:48 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:09.204 22:50:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:09.204 22:50:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.204 22:50:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:09.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:09.204 22:50:48 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.204 22:50:48 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.204 22:50:48 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.204 22:50:48 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:09.204 22:50:48 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.204 22:50:48 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:09.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.204 --rc genhtml_branch_coverage=1 00:06:09.204 --rc genhtml_function_coverage=1 00:06:09.204 --rc genhtml_legend=1 00:06:09.204 --rc geninfo_all_blocks=1 00:06:09.204 --rc geninfo_unexecuted_blocks=1 00:06:09.204 00:06:09.204 ' 00:06:09.204 22:50:48 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:09.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.204 --rc genhtml_branch_coverage=1 00:06:09.204 --rc genhtml_function_coverage=1 00:06:09.204 --rc genhtml_legend=1 00:06:09.204 --rc geninfo_all_blocks=1 00:06:09.204 --rc geninfo_unexecuted_blocks=1 00:06:09.204 00:06:09.204 ' 00:06:09.204 22:50:48 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:09.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.204 --rc genhtml_branch_coverage=1 00:06:09.204 --rc genhtml_function_coverage=1 00:06:09.204 --rc genhtml_legend=1 00:06:09.204 --rc geninfo_all_blocks=1 00:06:09.204 --rc geninfo_unexecuted_blocks=1 00:06:09.204 00:06:09.204 ' 00:06:09.204 22:50:48 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:09.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.204 --rc genhtml_branch_coverage=1 00:06:09.204 --rc genhtml_function_coverage=1 00:06:09.204 --rc genhtml_legend=1 00:06:09.204 --rc geninfo_all_blocks=1 00:06:09.204 --rc geninfo_unexecuted_blocks=1 00:06:09.204 00:06:09.204 ' 00:06:09.204 22:50:48 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:09.204 22:50:48 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=72037 00:06:09.204 22:50:48 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:09.205 22:50:48 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 72037 00:06:09.205 22:50:48 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 72037 ']' 00:06:09.205 22:50:48 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.205 22:50:48 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.205 22:50:48 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.205 22:50:48 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.205 22:50:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:09.205 22:50:48 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:09.205 [2024-11-26 22:50:48.161805] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
00:06:09.205 [2024-11-26 22:50:48.162050] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72037 ] 00:06:09.205 [2024-11-26 22:50:48.296531] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:09.205 [2024-11-26 22:50:48.319162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:09.466 [2024-11-26 22:50:48.346455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.466 [2024-11-26 22:50:48.346755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.466 [2024-11-26 22:50:48.346976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.466 [2024-11-26 22:50:48.347083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:10.038 22:50:49 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.038 22:50:49 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:10.038 22:50:49 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:10.039 22:50:49 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.039 22:50:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:10.039 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:10.039 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:10.039 POWER: intel_pstate driver is not supported 00:06:10.039 POWER: cppc_cpufreq driver is not supported 00:06:10.039 POWER: amd-pstate driver is not supported 00:06:10.039 POWER: acpi-cpufreq driver is not supported 00:06:10.039 POWER: Unable to set Power Management Environment for lcore 0 00:06:10.039 [2024-11-26 22:50:49.012683] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:10.039 [2024-11-26 22:50:49.012716] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:10.039 [2024-11-26 22:50:49.012729] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:10.039 [2024-11-26 22:50:49.012745] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:10.039 [2024-11-26 22:50:49.012757] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:10.039 [2024-11-26 22:50:49.012776] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:10.039 22:50:49 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.039 22:50:49 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:10.039 22:50:49 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.039 22:50:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:10.039 [2024-11-26 22:50:49.084960] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
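The startup dance above is worth spelling out: the scheduler app is launched with --wait-for-rpc, so it parks after EAL init; waitforlisten polls until the RPC socket answers; only then is the dynamic scheduler selected and initialization released. On this VM none of the cpufreq drivers (intel_pstate, cppc, amd-pstate, acpi) are available, so the dpdk governor fails and the dynamic scheduler runs without frequency scaling, exactly as the notices say. A sketch of the sequence; the polling loop is a stand-in for the real waitforlisten, while rpc_get_methods, framework_set_scheduler and framework_start_init are standard SPDK RPCs:

    app=/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler
    rpc="scripts/rpc.py -s /var/tmp/spdk.sock"

    $app -m 0xF -p 0x2 --wait-for-rpc -f &       # flags as in the trace
    scheduler_pid=$!
    until $rpc rpc_get_methods &> /dev/null; do  # wait for the UNIX socket to answer
        kill -0 "$scheduler_pid" || exit 1       # bail out if the app died early
        sleep 0.1
    done
    $rpc framework_set_scheduler dynamic         # must land before subsystem init
    $rpc framework_start_init                    # now let the app finish starting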
00:06:10.039 22:50:49 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.039 22:50:49 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:10.039 22:50:49 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.039 22:50:49 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.039 22:50:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:10.039 ************************************ 00:06:10.039 START TEST scheduler_create_thread 00:06:10.039 ************************************ 00:06:10.039 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:10.039 22:50:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:10.039 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.039 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.039 2 00:06:10.039 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.039 22:50:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:10.039 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.039 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.039 3 00:06:10.039 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.039 22:50:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:10.039 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.039 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.039 4 00:06:10.039 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.039 22:50:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:10.039 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.039 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.039 5 00:06:10.039 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.039 22:50:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:10.039 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.039 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.039 6 00:06:10.039 22:50:49 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.039 22:50:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:10.039 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.039 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.039 7 00:06:10.039 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.039 22:50:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:10.039 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.039 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.039 8 00:06:10.039 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.039 22:50:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:10.039 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.039 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.299 9 00:06:10.299 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.299 22:50:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:10.299 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.299 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.299 10 00:06:10.299 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.299 22:50:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:10.299 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.299 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.299 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.299 22:50:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:10.299 22:50:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:10.299 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.299 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.299 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.299 22:50:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:10.299 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.299 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.299 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.299 22:50:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:10.299 22:50:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:10.299 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.299 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.868 ************************************ 00:06:10.868 END TEST scheduler_create_thread 00:06:10.868 ************************************ 00:06:10.868 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.868 00:06:10.868 real 0m0.592s 00:06:10.868 user 0m0.014s 00:06:10.868 sys 0m0.004s 00:06:10.868 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.868 22:50:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.868 22:50:49 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:10.868 22:50:49 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 72037 00:06:10.868 22:50:49 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 72037 ']' 00:06:10.868 22:50:49 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 72037 00:06:10.868 22:50:49 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:10.868 22:50:49 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.868 22:50:49 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72037 00:06:10.868 killing process with pid 72037 00:06:10.868 22:50:49 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:10.868 22:50:49 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:10.868 22:50:49 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72037' 00:06:10.868 22:50:49 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 72037 00:06:10.868 22:50:49 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 72037 00:06:11.126 [2024-11-26 22:50:50.166820] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
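The create/tune/delete calls above are not stock SPDK RPCs; the scheduler test app registers them through an rpc.py plugin (scheduler_plugin), which is why every call carries --plugin. One thread's full lifecycle, extracted from the sequence above and assuming the plugin module that ships next to the test app is importable via PYTHONPATH:

    rpc="scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin"
    # a thread pinned to core 0 that reports itself 100% busy
    tid=$($rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100)
    $rpc scheduler_thread_set_active "$tid" 50    # drop its reported load to 50%
    $rpc scheduler_thread_delete "$tid"           # retire it; the trace does this to id 12

Between them, the pinned, unpinned, busy and idle threads created above give the dynamic scheduler a realistic mix of loads to rebalance across the four reactors.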
00:06:11.385 ************************************ 00:06:11.385 END TEST event_scheduler 00:06:11.385 ************************************ 00:06:11.385 00:06:11.385 real 0m2.367s 00:06:11.385 user 0m4.572s 00:06:11.385 sys 0m0.366s 00:06:11.385 22:50:50 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.385 22:50:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:11.385 22:50:50 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:11.385 22:50:50 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:11.385 22:50:50 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.385 22:50:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.385 22:50:50 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.385 ************************************ 00:06:11.385 START TEST app_repeat 00:06:11.385 ************************************ 00:06:11.385 22:50:50 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:11.385 22:50:50 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.385 22:50:50 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.385 22:50:50 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:11.385 22:50:50 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.385 22:50:50 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:11.385 22:50:50 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:11.385 22:50:50 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:11.385 Process app_repeat pid: 72115 00:06:11.385 spdk_app_start Round 0 00:06:11.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:11.385 22:50:50 event.app_repeat -- event/event.sh@19 -- # repeat_pid=72115 00:06:11.385 22:50:50 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:11.385 22:50:50 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 72115' 00:06:11.385 22:50:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:11.385 22:50:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:11.385 22:50:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72115 /var/tmp/spdk-nbd.sock 00:06:11.385 22:50:50 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72115 ']' 00:06:11.385 22:50:50 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.385 22:50:50 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.385 22:50:50 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:11.385 22:50:50 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.385 22:50:50 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:11.385 22:50:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:11.385 [2024-11-26 22:50:50.419181] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
00:06:11.385 [2024-11-26 22:50:50.419537] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72115 ] 00:06:11.645 [2024-11-26 22:50:50.550250] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:11.645 [2024-11-26 22:50:50.577831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:11.645 [2024-11-26 22:50:50.597129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.645 [2024-11-26 22:50:50.597221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.216 22:50:51 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.216 22:50:51 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:12.216 22:50:51 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.476 Malloc0 00:06:12.476 22:50:51 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.733 Malloc1 00:06:12.733 22:50:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.733 22:50:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.733 22:50:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.733 22:50:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:12.733 22:50:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.733 22:50:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:12.733 22:50:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.733 22:50:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.733 22:50:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.733 22:50:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:12.733 22:50:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.733 22:50:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:12.733 22:50:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:12.733 22:50:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:12.733 22:50:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.733 22:50:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:12.992 /dev/nbd0 00:06:12.992 22:50:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:12.992 22:50:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:12.992 22:50:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:12.992 22:50:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:12.992 22:50:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:12.992 22:50:51 event.app_repeat -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:12.992 22:50:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:12.992 22:50:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:12.992 22:50:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:12.992 22:50:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:12.992 22:50:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.992 1+0 records in 00:06:12.992 1+0 records out 00:06:12.992 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346038 s, 11.8 MB/s 00:06:12.992 22:50:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.992 22:50:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:12.992 22:50:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.992 22:50:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:12.992 22:50:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:12.992 22:50:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.992 22:50:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.992 22:50:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:13.252 /dev/nbd1 00:06:13.252 22:50:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:13.252 22:50:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:13.252 22:50:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:13.252 22:50:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:13.252 22:50:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:13.252 22:50:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:13.252 22:50:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:13.252 22:50:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:13.252 22:50:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:13.252 22:50:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:13.252 22:50:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.252 1+0 records in 00:06:13.252 1+0 records out 00:06:13.252 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287578 s, 14.2 MB/s 00:06:13.252 22:50:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.252 22:50:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:13.252 22:50:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.252 22:50:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:13.252 22:50:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:13.252 22:50:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.252 22:50:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.252 22:50:52 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.252 22:50:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.252 22:50:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:13.512 { 00:06:13.512 "nbd_device": "/dev/nbd0", 00:06:13.512 "bdev_name": "Malloc0" 00:06:13.512 }, 00:06:13.512 { 00:06:13.512 "nbd_device": "/dev/nbd1", 00:06:13.512 "bdev_name": "Malloc1" 00:06:13.512 } 00:06:13.512 ]' 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:13.512 { 00:06:13.512 "nbd_device": "/dev/nbd0", 00:06:13.512 "bdev_name": "Malloc0" 00:06:13.512 }, 00:06:13.512 { 00:06:13.512 "nbd_device": "/dev/nbd1", 00:06:13.512 "bdev_name": "Malloc1" 00:06:13.512 } 00:06:13.512 ]' 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:13.512 /dev/nbd1' 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:13.512 /dev/nbd1' 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:13.512 256+0 records in 00:06:13.512 256+0 records out 00:06:13.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00709697 s, 148 MB/s 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:13.512 256+0 records in 00:06:13.512 256+0 records out 00:06:13.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254582 s, 41.2 MB/s 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:13.512 256+0 records in 00:06:13.512 256+0 records out 00:06:13.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0275069 s, 38.1 MB/s 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:13.512 
22:50:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.512 22:50:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:13.772 22:50:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:13.772 22:50:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:13.772 22:50:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:13.772 22:50:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.772 22:50:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.772 22:50:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:13.772 22:50:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:13.772 22:50:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.772 22:50:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.772 22:50:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:14.033 22:50:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:14.033 22:50:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:14.033 22:50:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:14.033 22:50:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.033 22:50:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.033 22:50:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:14.033 22:50:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.033 22:50:52 event.app_repeat -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:14.033 22:50:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.033 22:50:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.033 22:50:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.033 22:50:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:14.033 22:50:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:14.033 22:50:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.293 22:50:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:14.293 22:50:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:14.293 22:50:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.293 22:50:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:14.293 22:50:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:14.293 22:50:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:14.293 22:50:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:14.293 22:50:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:14.293 22:50:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:14.293 22:50:53 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:14.553 22:50:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:14.553 [2024-11-26 22:50:53.509975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.553 [2024-11-26 22:50:53.528933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.553 [2024-11-26 22:50:53.529091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.553 [2024-11-26 22:50:53.562286] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:14.553 [2024-11-26 22:50:53.562365] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:17.866 spdk_app_start Round 1 00:06:17.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:17.866 22:50:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:17.866 22:50:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:17.866 22:50:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72115 /var/tmp/spdk-nbd.sock 00:06:17.866 22:50:56 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72115 ']' 00:06:17.866 22:50:56 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.866 22:50:56 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.866 22:50:56 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
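That closes Round 0 of app_repeat, and it is one complete data roundtrip per device: export each malloc bdev over nbd, write 1 MiB of random data through the block device with O_DIRECT, read it back with cmp, then unwind the exports and SIGTERM the app before the next round starts. The core of one device's pass, using the nbd_start_disk/nbd_stop_disk RPCs from the trace (the waitfornbd greps of /proc/partitions are omitted here):

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc nbd_start_disk Malloc0 /dev/nbd0                  # expose the bdev as /dev/nbd0
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256    # 1 MiB random payload
    dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest /dev/nbd0                     # byte-for-byte readback check
    rm nbdrandtest
    $rpc nbd_stop_disk /dev/nbd0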
00:06:17.866 22:50:56 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.866 22:50:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:17.866 22:50:56 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.866 22:50:56 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:17.866 22:50:56 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.866 Malloc0 00:06:17.866 22:50:56 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.126 Malloc1 00:06:18.126 22:50:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.126 22:50:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.126 22:50:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.126 22:50:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:18.126 22:50:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.126 22:50:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:18.126 22:50:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.126 22:50:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.126 22:50:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.126 22:50:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:18.126 22:50:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.126 22:50:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:18.126 22:50:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:18.127 22:50:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:18.127 22:50:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.127 22:50:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:18.387 /dev/nbd0 00:06:18.387 22:50:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:18.387 22:50:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:18.387 22:50:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:18.387 22:50:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:18.387 22:50:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.387 22:50:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.387 22:50:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:18.387 22:50:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:18.387 22:50:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.387 22:50:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.387 22:50:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:18.387 1+0 records in 00:06:18.387 1+0 records out 
00:06:18.387 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295066 s, 13.9 MB/s 00:06:18.387 22:50:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:18.387 22:50:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:18.387 22:50:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:18.387 22:50:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.387 22:50:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:18.387 22:50:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.387 22:50:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.387 22:50:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:18.648 /dev/nbd1 00:06:18.648 22:50:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:18.648 22:50:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:18.648 22:50:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:18.648 22:50:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:18.649 22:50:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.649 22:50:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.649 22:50:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:18.649 22:50:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:18.649 22:50:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.649 22:50:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.649 22:50:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:18.649 1+0 records in 00:06:18.649 1+0 records out 00:06:18.649 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240841 s, 17.0 MB/s 00:06:18.649 22:50:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:18.649 22:50:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:18.649 22:50:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:18.649 22:50:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.649 22:50:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:18.649 22:50:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.649 22:50:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.649 22:50:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.649 22:50:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.649 22:50:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.649 22:50:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:18.649 { 00:06:18.649 "nbd_device": "/dev/nbd0", 00:06:18.649 "bdev_name": "Malloc0" 00:06:18.649 }, 00:06:18.649 { 00:06:18.649 "nbd_device": "/dev/nbd1", 00:06:18.649 "bdev_name": "Malloc1" 00:06:18.649 } 
00:06:18.649 ]' 00:06:18.910 22:50:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.910 22:50:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:18.910 { 00:06:18.910 "nbd_device": "/dev/nbd0", 00:06:18.910 "bdev_name": "Malloc0" 00:06:18.910 }, 00:06:18.910 { 00:06:18.910 "nbd_device": "/dev/nbd1", 00:06:18.910 "bdev_name": "Malloc1" 00:06:18.910 } 00:06:18.910 ]' 00:06:18.910 22:50:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:18.910 /dev/nbd1' 00:06:18.910 22:50:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.910 22:50:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:18.910 /dev/nbd1' 00:06:18.910 22:50:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:18.910 22:50:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:18.910 22:50:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:18.910 22:50:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:18.910 22:50:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:18.910 22:50:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.910 22:50:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.910 22:50:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:18.910 22:50:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:18.910 22:50:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:18.910 22:50:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:18.910 256+0 records in 00:06:18.910 256+0 records out 00:06:18.910 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00934317 s, 112 MB/s 00:06:18.910 22:50:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.910 22:50:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:18.910 256+0 records in 00:06:18.910 256+0 records out 00:06:18.910 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0157826 s, 66.4 MB/s 00:06:18.910 22:50:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.910 22:50:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:18.910 256+0 records in 00:06:18.910 256+0 records out 00:06:18.910 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0190912 s, 54.9 MB/s 00:06:18.910 22:50:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:18.910 22:50:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.910 22:50:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.910 22:50:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:18.910 22:50:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:18.910 22:50:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:18.910 22:50:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:18.910 22:50:57 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.910 22:50:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:18.910 22:50:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.910 22:50:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:18.910 22:50:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:18.911 22:50:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:18.911 22:50:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.911 22:50:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.911 22:50:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:18.911 22:50:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:18.911 22:50:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.911 22:50:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:19.172 22:50:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:19.172 22:50:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:19.172 22:50:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:19.172 22:50:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.172 22:50:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.172 22:50:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:19.172 22:50:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:19.172 22:50:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.172 22:50:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.172 22:50:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:19.172 22:50:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:19.172 22:50:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:19.172 22:50:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:19.172 22:50:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.172 22:50:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.172 22:50:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:19.172 22:50:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:19.172 22:50:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.172 22:50:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.172 22:50:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.172 22:50:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.432 22:50:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:19.432 22:50:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:19.432 22:50:58 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:19.432 22:50:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:19.432 22:50:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:19.432 22:50:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.432 22:50:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:19.432 22:50:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:19.432 22:50:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:19.432 22:50:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:19.432 22:50:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:19.432 22:50:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:19.432 22:50:58 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:19.692 22:50:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:19.692 [2024-11-26 22:50:58.801692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.952 [2024-11-26 22:50:58.825568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.952 [2024-11-26 22:50:58.825669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.952 [2024-11-26 22:50:58.871376] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:19.952 [2024-11-26 22:50:58.871431] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:23.237 spdk_app_start Round 2 00:06:23.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:23.237 22:51:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:23.237 22:51:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:23.237 22:51:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72115 /var/tmp/spdk-nbd.sock 00:06:23.237 22:51:01 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72115 ']' 00:06:23.237 22:51:01 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:23.237 22:51:01 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.237 22:51:01 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
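Each app_repeat round in this trace reduces to the same NBD round-trip against the target's RPC socket. A minimal sketch of that round-trip, assuming an SPDK target is already listening on /var/tmp/spdk-nbd.sock and the nbd kernel module is loaded (the temp-file path is illustrative; the RPC method names are the ones visible in the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
$rpc -s "$sock" bdev_malloc_create 64 4096          # 64 MiB malloc bdev, 4 KiB blocks -> "Malloc0"
$rpc -s "$sock" nbd_start_disk Malloc0 /dev/nbd0    # expose the bdev as a kernel NBD device
dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0             # byte-compare the data that was written
$rpc -s "$sock" nbd_stop_disk /dev/nbd0
rm /tmp/nbdrandtest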
00:06:23.237 22:51:01 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.237 22:51:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:23.237 22:51:01 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.237 22:51:01 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:23.237 22:51:01 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.237 Malloc0 00:06:23.237 22:51:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.237 Malloc1 00:06:23.237 22:51:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.237 22:51:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.237 22:51:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.237 22:51:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:23.237 22:51:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.237 22:51:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:23.237 22:51:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.237 22:51:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.237 22:51:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.237 22:51:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:23.237 22:51:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.237 22:51:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:23.237 22:51:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:23.237 22:51:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:23.237 22:51:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.237 22:51:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:23.496 /dev/nbd0 00:06:23.496 22:51:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:23.496 22:51:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:23.496 22:51:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:23.496 22:51:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:23.496 22:51:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:23.496 22:51:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:23.496 22:51:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:23.496 22:51:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:23.496 22:51:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:23.496 22:51:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:23.496 22:51:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.496 1+0 records in 00:06:23.496 1+0 records out 
00:06:23.496 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000513418 s, 8.0 MB/s 00:06:23.496 22:51:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.496 22:51:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:23.496 22:51:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.496 22:51:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:23.496 22:51:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:23.496 22:51:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.496 22:51:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.496 22:51:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:23.755 /dev/nbd1 00:06:23.755 22:51:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:23.755 22:51:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:23.755 22:51:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:23.755 22:51:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:23.755 22:51:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:23.755 22:51:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:23.755 22:51:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:23.755 22:51:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:23.755 22:51:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:23.755 22:51:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:23.755 22:51:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.755 1+0 records in 00:06:23.755 1+0 records out 00:06:23.755 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258821 s, 15.8 MB/s 00:06:23.755 22:51:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.755 22:51:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:23.755 22:51:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.755 22:51:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:23.755 22:51:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:23.755 22:51:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.755 22:51:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.755 22:51:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.755 22:51:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.755 22:51:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.015 22:51:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:24.015 { 00:06:24.015 "nbd_device": "/dev/nbd0", 00:06:24.015 "bdev_name": "Malloc0" 00:06:24.015 }, 00:06:24.015 { 00:06:24.015 "nbd_device": "/dev/nbd1", 00:06:24.015 "bdev_name": "Malloc1" 00:06:24.015 } 
00:06:24.015 ]' 00:06:24.015 22:51:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.015 22:51:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:24.015 { 00:06:24.015 "nbd_device": "/dev/nbd0", 00:06:24.015 "bdev_name": "Malloc0" 00:06:24.015 }, 00:06:24.015 { 00:06:24.015 "nbd_device": "/dev/nbd1", 00:06:24.015 "bdev_name": "Malloc1" 00:06:24.015 } 00:06:24.015 ]' 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:24.015 /dev/nbd1' 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:24.015 /dev/nbd1' 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:24.015 256+0 records in 00:06:24.015 256+0 records out 00:06:24.015 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00909062 s, 115 MB/s 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:24.015 256+0 records in 00:06:24.015 256+0 records out 00:06:24.015 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0190548 s, 55.0 MB/s 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:24.015 256+0 records in 00:06:24.015 256+0 records out 00:06:24.015 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020696 s, 50.7 MB/s 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.015 22:51:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:24.280 22:51:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:24.280 22:51:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:24.280 22:51:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:24.280 22:51:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.281 22:51:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.281 22:51:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:24.281 22:51:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:24.281 22:51:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.281 22:51:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.281 22:51:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:24.543 22:51:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:24.543 22:51:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:24.543 22:51:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:24.543 22:51:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.543 22:51:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.543 22:51:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:24.543 22:51:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:24.543 22:51:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.543 22:51:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.543 22:51:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.543 22:51:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.804 22:51:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:24.804 22:51:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.804 22:51:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:06:24.804 22:51:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:24.804 22:51:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:24.804 22:51:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.804 22:51:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:24.804 22:51:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:24.804 22:51:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:24.804 22:51:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:24.804 22:51:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:24.804 22:51:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:24.804 22:51:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:25.063 22:51:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:25.063 [2024-11-26 22:51:04.052701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.063 [2024-11-26 22:51:04.071599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.063 [2024-11-26 22:51:04.071712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.063 [2024-11-26 22:51:04.104596] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:25.063 [2024-11-26 22:51:04.104647] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:28.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:28.393 22:51:06 event.app_repeat -- event/event.sh@38 -- # waitforlisten 72115 /var/tmp/spdk-nbd.sock 00:06:28.393 22:51:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72115 ']' 00:06:28.393 22:51:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:28.393 22:51:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.393 22:51:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
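The count=2 / count=0 checks above come from nbd_get_disks, whose JSON output is reduced to a device count. The same query by hand (note that grep -c exits non-zero on zero matches, which is why the helper falls back to true in the trace):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
  | jq -r '.[] | .nbd_device' \
  | grep -c /dev/nbd || true    # prints 2 while both disks are attached, 0 after nbd_stop_disk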
00:06:28.393 22:51:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.393 22:51:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:28.393 22:51:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.393 22:51:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:28.393 22:51:07 event.app_repeat -- event/event.sh@39 -- # killprocess 72115 00:06:28.393 22:51:07 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 72115 ']' 00:06:28.393 22:51:07 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 72115 00:06:28.393 22:51:07 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:28.393 22:51:07 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.393 22:51:07 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72115 00:06:28.393 killing process with pid 72115 00:06:28.393 22:51:07 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.393 22:51:07 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.393 22:51:07 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72115' 00:06:28.393 22:51:07 event.app_repeat -- common/autotest_common.sh@973 -- # kill 72115 00:06:28.393 22:51:07 event.app_repeat -- common/autotest_common.sh@978 -- # wait 72115 00:06:28.393 spdk_app_start is called in Round 0. 00:06:28.393 Shutdown signal received, stop current app iteration 00:06:28.393 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 reinitialization... 00:06:28.393 spdk_app_start is called in Round 1. 00:06:28.393 Shutdown signal received, stop current app iteration 00:06:28.393 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 reinitialization... 00:06:28.393 spdk_app_start is called in Round 2. 00:06:28.393 Shutdown signal received, stop current app iteration 00:06:28.393 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 reinitialization... 00:06:28.393 spdk_app_start is called in Round 3. 00:06:28.393 Shutdown signal received, stop current app iteration 00:06:28.393 22:51:07 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:28.393 22:51:07 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:28.393 00:06:28.393 real 0m16.944s 00:06:28.393 user 0m37.741s 00:06:28.393 sys 0m2.150s 00:06:28.393 22:51:07 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.393 22:51:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:28.393 ************************************ 00:06:28.393 END TEST app_repeat 00:06:28.393 ************************************ 00:06:28.393 22:51:07 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:28.393 22:51:07 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:28.393 22:51:07 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.393 22:51:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.393 22:51:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:28.393 ************************************ 00:06:28.393 START TEST cpu_locks 00:06:28.393 ************************************ 00:06:28.393 22:51:07 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:28.393 * Looking for test storage... 
00:06:28.393 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:28.393 22:51:07 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:28.393 22:51:07 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:28.393 22:51:07 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:28.393 22:51:07 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:28.393 22:51:07 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.393 22:51:07 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.393 22:51:07 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.393 22:51:07 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.393 22:51:07 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.393 22:51:07 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.393 22:51:07 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.393 22:51:07 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.393 22:51:07 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.393 22:51:07 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.393 22:51:07 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.393 22:51:07 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:28.393 22:51:07 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:28.393 22:51:07 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.393 22:51:07 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:28.393 22:51:07 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:28.393 22:51:07 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:28.393 22:51:07 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.393 22:51:07 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:28.393 22:51:07 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.393 22:51:07 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:28.393 22:51:07 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:28.393 22:51:07 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.393 22:51:07 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:28.393 22:51:07 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.393 22:51:07 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.393 22:51:07 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.393 22:51:07 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:28.393 22:51:07 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.393 22:51:07 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:28.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.393 --rc genhtml_branch_coverage=1 00:06:28.393 --rc genhtml_function_coverage=1 00:06:28.393 --rc genhtml_legend=1 00:06:28.393 --rc geninfo_all_blocks=1 00:06:28.393 --rc geninfo_unexecuted_blocks=1 00:06:28.393 00:06:28.393 ' 00:06:28.393 22:51:07 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:28.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.393 --rc genhtml_branch_coverage=1 00:06:28.393 --rc genhtml_function_coverage=1 
00:06:28.393 --rc genhtml_legend=1 00:06:28.393 --rc geninfo_all_blocks=1 00:06:28.393 --rc geninfo_unexecuted_blocks=1 00:06:28.393 00:06:28.393 ' 00:06:28.393 22:51:07 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:28.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.393 --rc genhtml_branch_coverage=1 00:06:28.393 --rc genhtml_function_coverage=1 00:06:28.393 --rc genhtml_legend=1 00:06:28.393 --rc geninfo_all_blocks=1 00:06:28.393 --rc geninfo_unexecuted_blocks=1 00:06:28.393 00:06:28.393 ' 00:06:28.393 22:51:07 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:28.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.393 --rc genhtml_branch_coverage=1 00:06:28.393 --rc genhtml_function_coverage=1 00:06:28.393 --rc genhtml_legend=1 00:06:28.393 --rc geninfo_all_blocks=1 00:06:28.393 --rc geninfo_unexecuted_blocks=1 00:06:28.393 00:06:28.393 ' 00:06:28.393 22:51:07 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:28.393 22:51:07 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:28.393 22:51:07 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:28.393 22:51:07 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:28.393 22:51:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.393 22:51:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.393 22:51:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.393 ************************************ 00:06:28.393 START TEST default_locks 00:06:28.393 ************************************ 00:06:28.393 22:51:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:28.393 22:51:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=72535 00:06:28.393 22:51:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 72535 00:06:28.393 22:51:07 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 72535 ']' 00:06:28.393 22:51:07 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.393 22:51:07 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.393 22:51:07 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.393 22:51:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.394 22:51:07 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.394 22:51:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.655 [2024-11-26 22:51:07.589225] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
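default_locks starts a one-core target (-m 0x1) and then asserts, via the locks_exist helper, that the process holds its per-core lock file. The assertion boils down to roughly the following, with the pid taken from the trace for illustration (the harness tracks the pid it spawned itself):

pid=72535                                   # spdk_tgt pid from the launch above
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by $pid"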
00:06:28.655 [2024-11-26 22:51:07.589358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72535 ] 00:06:28.655 [2024-11-26 22:51:07.721127] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:28.655 [2024-11-26 22:51:07.751910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.655 [2024-11-26 22:51:07.777105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.596 22:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.596 22:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:29.596 22:51:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 72535 00:06:29.596 22:51:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 72535 00:06:29.596 22:51:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.596 22:51:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 72535 00:06:29.596 22:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 72535 ']' 00:06:29.596 22:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 72535 00:06:29.596 22:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:29.596 22:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.596 22:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72535 00:06:29.596 killing process with pid 72535 00:06:29.596 22:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.596 22:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.596 22:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72535' 00:06:29.596 22:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 72535 00:06:29.596 22:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 72535 00:06:29.856 22:51:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 72535 00:06:29.856 22:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:29.856 22:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72535 00:06:29.856 22:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:29.856 22:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.856 22:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:29.856 22:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.856 22:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 72535 00:06:29.856 22:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 72535 ']' 00:06:29.856 22:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.856 22:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.857 22:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.857 ERROR: process (pid: 72535) is no longer running 00:06:29.857 22:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.857 22:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.857 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72535) - No such process 00:06:29.857 22:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.857 22:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:29.857 22:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:29.857 22:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:29.857 22:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:29.857 22:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:29.857 22:51:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:29.857 22:51:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:29.857 ************************************ 00:06:29.857 22:51:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:29.857 22:51:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:29.857 00:06:29.857 real 0m1.320s 00:06:29.857 user 0m1.310s 00:06:29.857 sys 0m0.400s 00:06:29.857 22:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.857 22:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.857 END TEST default_locks 00:06:29.857 ************************************ 00:06:29.857 22:51:08 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:29.857 22:51:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.857 22:51:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.857 22:51:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.857 ************************************ 00:06:29.857 START TEST default_locks_via_rpc 00:06:29.857 ************************************ 00:06:29.857 22:51:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:29.857 22:51:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=72581 00:06:29.857 22:51:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 72581 00:06:29.857 22:51:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72581 ']' 00:06:29.857 22:51:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.857 22:51:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.857 22:51:08 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.857 22:51:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.857 22:51:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.857 22:51:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.857 [2024-11-26 22:51:08.964552] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:29.857 [2024-11-26 22:51:08.964684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72581 ] 00:06:30.119 [2024-11-26 22:51:09.097747] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:30.119 [2024-11-26 22:51:09.125334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.119 [2024-11-26 22:51:09.149645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.689 22:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.689 22:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:30.689 22:51:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:30.689 22:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.689 22:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.950 22:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.950 22:51:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:30.950 22:51:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:30.950 22:51:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:30.950 22:51:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:30.950 22:51:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:30.950 22:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.950 22:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.950 22:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.950 22:51:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 72581 00:06:30.950 22:51:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.950 22:51:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 72581 00:06:30.950 22:51:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 72581 00:06:30.950 22:51:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 72581 ']' 00:06:30.950 22:51:10 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 72581 00:06:30.950 22:51:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:30.950 22:51:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.950 22:51:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72581 00:06:30.950 killing process with pid 72581 00:06:30.950 22:51:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.950 22:51:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.950 22:51:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72581' 00:06:30.950 22:51:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 72581 00:06:30.950 22:51:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 72581 00:06:31.522 ************************************ 00:06:31.522 END TEST default_locks_via_rpc 00:06:31.522 ************************************ 00:06:31.522 00:06:31.522 real 0m1.496s 00:06:31.522 user 0m1.481s 00:06:31.522 sys 0m0.478s 00:06:31.522 22:51:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.522 22:51:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.522 22:51:10 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:31.522 22:51:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.522 22:51:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.522 22:51:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.522 ************************************ 00:06:31.522 START TEST non_locking_app_on_locked_coremask 00:06:31.522 ************************************ 00:06:31.522 22:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:31.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.522 22:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=72629 00:06:31.522 22:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 72629 /var/tmp/spdk.sock 00:06:31.522 22:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72629 ']' 00:06:31.522 22:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.522 22:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.522 22:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
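The via-RPC variant that just ended never restarts the target; it toggles the same per-core locks at runtime with the two framework RPCs visible in its trace. Roughly, against the default /var/tmp/spdk.sock:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc framework_disable_cpumask_locks                  # drop the spdk_cpu_lock files
lslocks -p 72581 | grep -c spdk_cpu_lock || true      # expect 0 while disabled
$rpc framework_enable_cpumask_locks                   # re-acquire; lslocks finds the lock again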
00:06:31.522 22:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:31.522 22:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.522 22:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.522 [2024-11-26 22:51:10.509123] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:31.522 [2024-11-26 22:51:10.509233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72629 ] 00:06:31.522 [2024-11-26 22:51:10.640675] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:31.782 [2024-11-26 22:51:10.663380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.782 [2024-11-26 22:51:10.688229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.349 22:51:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.349 22:51:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:32.349 22:51:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=72645 00:06:32.349 22:51:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 72645 /var/tmp/spdk2.sock 00:06:32.349 22:51:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72645 ']' 00:06:32.349 22:51:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.349 22:51:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.349 22:51:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:32.349 22:51:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.349 22:51:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.349 22:51:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.349 [2024-11-26 22:51:11.421885] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:32.349 [2024-11-26 22:51:11.422797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72645 ] 00:06:32.608 [2024-11-26 22:51:11.568353] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
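non_locking_app_on_locked_coremask then runs two targets on the same core, which only works because the second instance opts out of lock acquisition and listens on its own socket. The two launch lines from the trace, reduced (backgrounding added for illustration):

spdk_tgt -m 0x1 &                                                  # pid 72629: acquires the core-0 lock
spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # pid 72645: skips locking, so no conflict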
00:06:32.608 [2024-11-26 22:51:11.609233] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:32.608 [2024-11-26 22:51:11.615333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.608 [2024-11-26 22:51:11.664000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.175 22:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.175 22:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:33.175 22:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 72629 00:06:33.175 22:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:33.175 22:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72629 00:06:33.741 22:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 72629 00:06:33.741 22:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72629 ']' 00:06:33.741 22:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72629 00:06:33.741 22:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:33.741 22:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.741 22:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72629 00:06:33.741 killing process with pid 72629 00:06:33.741 22:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.741 22:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.741 22:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72629' 00:06:33.741 22:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72629 00:06:33.741 22:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72629 00:06:34.307 22:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 72645 00:06:34.307 22:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72645 ']' 00:06:34.307 22:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72645 00:06:34.307 22:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:34.307 22:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.307 22:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72645 00:06:34.307 killing process with pid 72645 00:06:34.307 22:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.307 22:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.307 22:51:13 event.cpu_locks.non_locking_app_on_locked_coremask 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 72645' 00:06:34.307 22:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72645 00:06:34.307 22:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72645 00:06:34.566 00:06:34.566 real 0m3.093s 00:06:34.566 user 0m3.329s 00:06:34.566 sys 0m0.885s 00:06:34.566 22:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.566 ************************************ 00:06:34.566 END TEST non_locking_app_on_locked_coremask 00:06:34.566 ************************************ 00:06:34.566 22:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.566 22:51:13 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:34.566 22:51:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.566 22:51:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.566 22:51:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.566 ************************************ 00:06:34.566 START TEST locking_app_on_unlocked_coremask 00:06:34.566 ************************************ 00:06:34.566 22:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:34.566 22:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=72703 00:06:34.566 22:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 72703 /var/tmp/spdk.sock 00:06:34.566 22:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:34.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.566 22:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72703 ']' 00:06:34.566 22:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.566 22:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.566 22:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.566 22:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.566 22:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.566 [2024-11-26 22:51:13.639506] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:34.566 [2024-11-26 22:51:13.639632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72703 ] 00:06:34.825 [2024-11-26 22:51:13.772735] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
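The killprocess traces that close each of these tests follow one pattern: confirm the pid is alive, log its command name, then signal and reap it. In spirit (a sketch; the real helper in autotest_common.sh carries more branches):

killprocess() {
  local pid=$1
  kill -0 "$pid" || return 1          # still running?
  ps --no-headers -o comm= "$pid"     # reports "reactor_0" for these targets
  echo "killing process with pid $pid"
  kill "$pid" && wait "$pid"          # wait succeeds because the target is a shell child
}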
00:06:34.825 [2024-11-26 22:51:13.797892] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:34.825 [2024-11-26 22:51:13.798121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.825 [2024-11-26 22:51:13.821331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:35.394 22:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.394 22:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:35.394 22:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=72719 00:06:35.394 22:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 72719 /var/tmp/spdk2.sock 00:06:35.394 22:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72719 ']' 00:06:35.394 22:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:35.394 22:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:35.394 22:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.394 22:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:35.394 22:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.394 22:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.665 [2024-11-26 22:51:14.543289] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:35.665 [2024-11-26 22:51:14.543840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72719 ] 00:06:35.665 [2024-11-26 22:51:14.681699] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
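Two spdk_tgt instances can share core 0 in this test only because the first was launched with --disable-cpumask-locks; the second, launched without that flag, is the one expected to take the lock, which the locks_exist 72719 check below confirms. Each instance also needs its own RPC socket, hence -r /var/tmp/spdk2.sock on the second. The launch pattern, roughly:

    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &    # pid 72703: runs on core 0 but takes no lock
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &     # pid 72719: same core, takes the lock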
00:06:35.665 [2024-11-26 22:51:14.711003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.665 [2024-11-26 22:51:14.758011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.600 22:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.600 22:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:36.600 22:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 72719 00:06:36.600 22:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.600 22:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72719 00:06:36.859 22:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 72703 00:06:36.859 22:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72703 ']' 00:06:36.859 22:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72703 00:06:36.859 22:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:36.859 22:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.859 22:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72703 00:06:36.859 killing process with pid 72703 00:06:36.859 22:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.859 22:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.859 22:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72703' 00:06:36.859 22:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72703 00:06:36.859 22:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72703 00:06:37.425 22:51:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 72719 00:06:37.425 22:51:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72719 ']' 00:06:37.425 22:51:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72719 00:06:37.425 22:51:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:37.425 22:51:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.425 22:51:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72719 00:06:37.425 killing process with pid 72719 00:06:37.426 22:51:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.426 22:51:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.426 22:51:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72719' 00:06:37.426 22:51:16 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@973 -- # kill 72719 00:06:37.426 22:51:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72719 00:06:37.685 ************************************ 00:06:37.685 END TEST locking_app_on_unlocked_coremask 00:06:37.685 00:06:37.685 real 0m3.090s 00:06:37.685 user 0m3.416s 00:06:37.685 sys 0m0.832s 00:06:37.685 22:51:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.685 22:51:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.685 ************************************ 00:06:37.685 22:51:16 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:37.685 22:51:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.685 22:51:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.685 22:51:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.685 ************************************ 00:06:37.685 START TEST locking_app_on_locked_coremask 00:06:37.685 ************************************ 00:06:37.685 22:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:37.685 22:51:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72777 00:06:37.685 22:51:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 72777 /var/tmp/spdk.sock 00:06:37.685 22:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72777 ']' 00:06:37.685 22:51:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:37.685 22:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.685 22:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.685 22:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.685 22:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.685 22:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.685 [2024-11-26 22:51:16.776839] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:37.685 [2024-11-26 22:51:16.777117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72777 ] 00:06:37.944 [2024-11-26 22:51:16.909866] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:37.944 [2024-11-26 22:51:16.935038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.944 [2024-11-26 22:51:16.958424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.511 22:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.511 22:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:38.511 22:51:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=72793 00:06:38.511 22:51:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 72793 /var/tmp/spdk2.sock 00:06:38.511 22:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:38.511 22:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72793 /var/tmp/spdk2.sock 00:06:38.511 22:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:38.511 22:51:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:38.511 22:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.511 22:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:38.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.511 22:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.511 22:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 72793 /var/tmp/spdk2.sock 00:06:38.511 22:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72793 ']' 00:06:38.511 22:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.511 22:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.511 22:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.511 22:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.511 22:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.769 [2024-11-26 22:51:17.658466] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:38.769 [2024-11-26 22:51:17.658755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72793 ] 00:06:38.769 [2024-11-26 22:51:17.802216] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:38.769 [2024-11-26 22:51:17.836018] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72777 has claimed it. 
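Here the second instance is expected to die: core 0 is already locked by pid 72777, so waitforlisten is wrapped in NOT, which inverts the exit status so the startup failure counts as a pass. A sketch of that wrapper, assuming the helper simply negates its command's result (the real one in autotest_common.sh also distinguishes signal exits via the es > 128 check traced below):

    NOT() {
        # run the command; succeed only if it failed
        if "$@"; then
            return 1
        fi
        return 0
    }
    NOT waitforlisten 72793 /var/tmp/spdk2.sock   # passes, because 72793 exits on the claim error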
00:06:38.769 [2024-11-26 22:51:17.836087] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:39.336 ERROR: process (pid: 72793) is no longer running 00:06:39.336 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72793) - No such process 00:06:39.336 22:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.336 22:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:39.336 22:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:39.336 22:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:39.336 22:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:39.336 22:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:39.336 22:51:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72777 00:06:39.336 22:51:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72777 00:06:39.336 22:51:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.336 22:51:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72777 00:06:39.336 22:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72777 ']' 00:06:39.336 22:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72777 00:06:39.336 22:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:39.336 22:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.336 22:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72777 00:06:39.594 killing process with pid 72777 00:06:39.594 22:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.594 22:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.594 22:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72777' 00:06:39.594 22:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72777 00:06:39.594 22:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72777 00:06:39.855 ************************************ 00:06:39.855 END TEST locking_app_on_locked_coremask 00:06:39.855 ************************************ 00:06:39.855 00:06:39.855 real 0m2.066s 00:06:39.855 user 0m2.259s 00:06:39.855 sys 0m0.548s 00:06:39.855 22:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.855 22:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.855 22:51:18 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:39.855 22:51:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.855 22:51:18 event.cpu_locks 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.855 22:51:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.855 ************************************ 00:06:39.855 START TEST locking_overlapped_coremask 00:06:39.855 ************************************ 00:06:39.855 22:51:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:39.855 22:51:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=72835 00:06:39.855 22:51:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 72835 /var/tmp/spdk.sock 00:06:39.855 22:51:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 72835 ']' 00:06:39.855 22:51:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.855 22:51:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:39.855 22:51:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.855 22:51:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.855 22:51:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.855 22:51:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.855 [2024-11-26 22:51:18.885727] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:39.855 [2024-11-26 22:51:18.885831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72835 ] 00:06:40.114 [2024-11-26 22:51:19.014384] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
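The mask arithmetic drives the overlap test that follows: -m 0x7 is binary 111, so pid 72835 claims cores 0, 1 and 2, while the second instance below is launched with -m 0x1c, binary 11100, claiming cores 2, 3 and 4. The two masks intersect only at core 2 (0x7 & 0x1c == 0x4), which is exactly the core named in the claim error below.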
00:06:40.114 [2024-11-26 22:51:19.038115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.114 [2024-11-26 22:51:19.062756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.114 [2024-11-26 22:51:19.063047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.114 [2024-11-26 22:51:19.063053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.680 22:51:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.680 22:51:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:40.680 22:51:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=72853 00:06:40.680 22:51:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 72853 /var/tmp/spdk2.sock 00:06:40.680 22:51:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:40.680 22:51:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72853 /var/tmp/spdk2.sock 00:06:40.680 22:51:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:40.680 22:51:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:40.680 22:51:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.680 22:51:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:40.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.680 22:51:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.680 22:51:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 72853 /var/tmp/spdk2.sock 00:06:40.680 22:51:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 72853 ']' 00:06:40.680 22:51:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.680 22:51:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.680 22:51:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.680 22:51:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.680 22:51:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.938 [2024-11-26 22:51:19.814996] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:40.938 [2024-11-26 22:51:19.815141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72853 ] 00:06:40.938 [2024-11-26 22:51:19.952044] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:06:40.938 [2024-11-26 22:51:19.997844] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72835 has claimed it. 00:06:40.938 [2024-11-26 22:51:19.997905] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:41.504 ERROR: process (pid: 72853) is no longer running 00:06:41.504 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72853) - No such process 00:06:41.504 22:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.504 22:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:41.504 22:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:41.504 22:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:41.504 22:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:41.504 22:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:41.504 22:51:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:41.504 22:51:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:41.504 22:51:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:41.504 22:51:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:41.504 22:51:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 72835 00:06:41.504 22:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 72835 ']' 00:06:41.504 22:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 72835 00:06:41.504 22:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:41.504 22:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.504 22:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72835 00:06:41.504 22:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.504 22:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.504 22:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72835' 00:06:41.504 killing process with pid 72835 00:06:41.504 22:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 72835 00:06:41.504 22:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 72835 00:06:41.762 00:06:41.762 real 0m2.010s 00:06:41.762 user 0m5.576s 00:06:41.762 sys 0m0.476s 00:06:41.762 22:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.762 
22:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.762 ************************************ 00:06:41.762 END TEST locking_overlapped_coremask 00:06:41.762 ************************************ 00:06:41.762 22:51:20 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:41.762 22:51:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.762 22:51:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.762 22:51:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.762 ************************************ 00:06:41.762 START TEST locking_overlapped_coremask_via_rpc 00:06:41.762 ************************************ 00:06:41.762 22:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:41.762 22:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=72895 00:06:41.762 22:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 72895 /var/tmp/spdk.sock 00:06:41.762 22:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72895 ']' 00:06:41.762 22:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.762 22:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.762 22:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.762 22:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.762 22:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:41.762 22:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.021 [2024-11-26 22:51:20.968521] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:42.021 [2024-11-26 22:51:20.968652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72895 ] 00:06:42.021 [2024-11-26 22:51:21.103111] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:42.021 [2024-11-26 22:51:21.123447] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:42.021 [2024-11-26 22:51:21.123493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:42.280 [2024-11-26 22:51:21.148367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.280 [2024-11-26 22:51:21.148498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.280 [2024-11-26 22:51:21.148511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.848 22:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.848 22:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:42.848 22:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=72913 00:06:42.848 22:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 72913 /var/tmp/spdk2.sock 00:06:42.848 22:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72913 ']' 00:06:42.848 22:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:42.848 22:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.848 22:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.848 22:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:42.848 22:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.848 22:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.848 [2024-11-26 22:51:21.848960] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:42.848 [2024-11-26 22:51:21.849918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72913 ] 00:06:43.136 [2024-11-26 22:51:21.995292] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:43.136 [2024-11-26 22:51:22.037016] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
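Unlike the previous test, both targets here start with --disable-cpumask-locks on the overlapping masks 0x7 and 0x1c, so both come up cleanly; the locking is then switched on after the fact over JSON-RPC. The first target claims its cores via framework_enable_cpumask_locks, and the same RPC against the second target is expected to fail on the shared core 2, as the error response below shows.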
00:06:43.136 [2024-11-26 22:51:22.037074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.136 [2024-11-26 22:51:22.092699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:43.136 [2024-11-26 22:51:22.096421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.136 [2024-11-26 22:51:22.096485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:43.713 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.713 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:43.713 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:43.713 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.713 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.713 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.713 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:43.713 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:43.713 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:43.713 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:43.713 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.713 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:43.713 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.713 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:43.713 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.713 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.713 [2024-11-26 22:51:22.711476] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72895 has claimed it. 00:06:43.713 request: 00:06:43.713 { 00:06:43.713 "method": "framework_enable_cpumask_locks", 00:06:43.713 "req_id": 1 00:06:43.713 } 00:06:43.713 Got JSON-RPC error response 00:06:43.713 response: 00:06:43.713 { 00:06:43.713 "code": -32603, 00:06:43.713 "message": "Failed to claim CPU core: 2" 00:06:43.713 } 00:06:43.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
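Reproducing that failure by hand would look roughly like this, using the rpc.py client already shown in this log, with the second target addressed through its socket:

    scripts/rpc.py framework_enable_cpumask_locks                         # first target: claims cores 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # second target: error -32603, core 2 taken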
00:06:43.713 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:43.713 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:43.713 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:43.713 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:43.713 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:43.713 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 72895 /var/tmp/spdk.sock 00:06:43.713 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72895 ']' 00:06:43.713 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.713 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.713 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.713 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.713 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.974 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.974 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:43.974 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 72913 /var/tmp/spdk2.sock 00:06:43.974 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72913 ']' 00:06:43.974 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.974 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.974 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
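After the expected failure, the harness confirms that exactly the three lock files for cores 0 through 2 exist and nothing else. The comparison is expanded verbatim in the traces above and below; as a standalone helper it amounts to:

    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        # string-compare the glob result against the expected three lock files
        [[ "${locks[*]}" == "${locks_expected[*]}" ]]
    }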
00:06:43.974 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.974 22:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.233 ************************************ 00:06:44.233 END TEST locking_overlapped_coremask_via_rpc 00:06:44.233 ************************************ 00:06:44.233 22:51:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.233 22:51:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:44.233 22:51:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:44.233 22:51:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:44.233 22:51:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:44.233 22:51:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:44.233 00:06:44.233 real 0m2.259s 00:06:44.233 user 0m1.047s 00:06:44.233 sys 0m0.141s 00:06:44.233 22:51:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.233 22:51:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.233 22:51:23 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:44.233 22:51:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72895 ]] 00:06:44.233 22:51:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72895 00:06:44.233 22:51:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72895 ']' 00:06:44.233 22:51:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72895 00:06:44.233 22:51:23 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:44.233 22:51:23 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.233 22:51:23 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72895 00:06:44.233 killing process with pid 72895 00:06:44.233 22:51:23 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:44.233 22:51:23 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:44.233 22:51:23 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72895' 00:06:44.233 22:51:23 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 72895 00:06:44.233 22:51:23 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 72895 00:06:44.491 22:51:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72913 ]] 00:06:44.491 22:51:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72913 00:06:44.491 22:51:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72913 ']' 00:06:44.491 22:51:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72913 00:06:44.491 22:51:23 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:44.491 22:51:23 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.491 
22:51:23 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72913 00:06:44.491 killing process with pid 72913 00:06:44.491 22:51:23 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:44.491 22:51:23 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:44.491 22:51:23 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72913' 00:06:44.491 22:51:23 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 72913 00:06:44.491 22:51:23 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 72913 00:06:44.751 22:51:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:44.751 22:51:23 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:44.751 22:51:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72895 ]] 00:06:44.751 22:51:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72895 00:06:44.751 22:51:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72895 ']' 00:06:44.751 Process with pid 72895 is not found 00:06:44.751 Process with pid 72913 is not found 00:06:44.751 22:51:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72895 00:06:44.751 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72895) - No such process 00:06:44.751 22:51:23 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 72895 is not found' 00:06:44.751 22:51:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72913 ]] 00:06:44.751 22:51:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72913 00:06:44.751 22:51:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72913 ']' 00:06:44.751 22:51:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72913 00:06:44.751 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72913) - No such process 00:06:44.751 22:51:23 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 72913 is not found' 00:06:44.751 22:51:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:45.013 ************************************ 00:06:45.013 END TEST cpu_locks 00:06:45.013 ************************************ 00:06:45.013 00:06:45.013 real 0m16.513s 00:06:45.013 user 0m29.054s 00:06:45.013 sys 0m4.643s 00:06:45.013 22:51:23 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.013 22:51:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.013 ************************************ 00:06:45.013 END TEST event 00:06:45.013 ************************************ 00:06:45.013 00:06:45.013 real 0m40.080s 00:06:45.013 user 1m17.783s 00:06:45.013 sys 0m7.627s 00:06:45.013 22:51:23 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.013 22:51:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:45.013 22:51:23 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:45.013 22:51:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.013 22:51:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.013 22:51:23 -- common/autotest_common.sh@10 -- # set +x 00:06:45.013 ************************************ 00:06:45.013 START TEST thread 00:06:45.013 ************************************ 00:06:45.013 22:51:23 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:45.013 * Looking for test storage... 
00:06:45.013 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:45.013 22:51:24 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:45.013 22:51:24 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:45.013 22:51:24 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:45.013 22:51:24 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:45.013 22:51:24 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.013 22:51:24 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.013 22:51:24 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.013 22:51:24 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.013 22:51:24 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.013 22:51:24 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.013 22:51:24 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.013 22:51:24 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.013 22:51:24 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.013 22:51:24 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.013 22:51:24 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.013 22:51:24 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:45.013 22:51:24 thread -- scripts/common.sh@345 -- # : 1 00:06:45.013 22:51:24 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.013 22:51:24 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:45.013 22:51:24 thread -- scripts/common.sh@365 -- # decimal 1 00:06:45.013 22:51:24 thread -- scripts/common.sh@353 -- # local d=1 00:06:45.013 22:51:24 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.013 22:51:24 thread -- scripts/common.sh@355 -- # echo 1 00:06:45.013 22:51:24 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.013 22:51:24 thread -- scripts/common.sh@366 -- # decimal 2 00:06:45.013 22:51:24 thread -- scripts/common.sh@353 -- # local d=2 00:06:45.013 22:51:24 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.013 22:51:24 thread -- scripts/common.sh@355 -- # echo 2 00:06:45.013 22:51:24 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.013 22:51:24 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.013 22:51:24 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.013 22:51:24 thread -- scripts/common.sh@368 -- # return 0 00:06:45.013 22:51:24 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.013 22:51:24 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:45.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.013 --rc genhtml_branch_coverage=1 00:06:45.013 --rc genhtml_function_coverage=1 00:06:45.013 --rc genhtml_legend=1 00:06:45.013 --rc geninfo_all_blocks=1 00:06:45.013 --rc geninfo_unexecuted_blocks=1 00:06:45.013 00:06:45.013 ' 00:06:45.013 22:51:24 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:45.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.013 --rc genhtml_branch_coverage=1 00:06:45.013 --rc genhtml_function_coverage=1 00:06:45.013 --rc genhtml_legend=1 00:06:45.013 --rc geninfo_all_blocks=1 00:06:45.013 --rc geninfo_unexecuted_blocks=1 00:06:45.013 00:06:45.013 ' 00:06:45.013 22:51:24 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:45.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:45.013 --rc genhtml_branch_coverage=1 00:06:45.013 --rc genhtml_function_coverage=1 00:06:45.013 --rc genhtml_legend=1 00:06:45.013 --rc geninfo_all_blocks=1 00:06:45.013 --rc geninfo_unexecuted_blocks=1 00:06:45.013 00:06:45.013 ' 00:06:45.013 22:51:24 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:45.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.013 --rc genhtml_branch_coverage=1 00:06:45.013 --rc genhtml_function_coverage=1 00:06:45.013 --rc genhtml_legend=1 00:06:45.013 --rc geninfo_all_blocks=1 00:06:45.013 --rc geninfo_unexecuted_blocks=1 00:06:45.013 00:06:45.013 ' 00:06:45.013 22:51:24 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:45.013 22:51:24 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:45.013 22:51:24 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.013 22:51:24 thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.013 ************************************ 00:06:45.013 START TEST thread_poller_perf 00:06:45.013 ************************************ 00:06:45.013 22:51:24 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:45.275 [2024-11-26 22:51:24.148862] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:45.275 [2024-11-26 22:51:24.149097] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73040 ] 00:06:45.275 [2024-11-26 22:51:24.278719] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:45.275 [2024-11-26 22:51:24.306610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.275 [2024-11-26 22:51:24.331273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.275 Running 1000 pollers for 1 seconds with 1 microseconds period. 
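Judging by the "Running 1000 pollers for 1 seconds with 1 microseconds period" banner above, the poller_perf flags map as follows: -b sets the number of pollers registered, -l the poller period in microseconds, and -t the run time in seconds, so this first run measures 1000 timed pollers over one second.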
00:06:46.661 [2024-11-26T22:51:25.788Z] ====================================== 00:06:46.661 [2024-11-26T22:51:25.788Z] busy:2615943502 (cyc) 00:06:46.661 [2024-11-26T22:51:25.788Z] total_run_count: 306000 00:06:46.661 [2024-11-26T22:51:25.788Z] tsc_hz: 2600000000 (cyc) 00:06:46.661 [2024-11-26T22:51:25.788Z] ====================================== 00:06:46.661 [2024-11-26T22:51:25.788Z] poller_cost: 8548 (cyc), 3287 (nsec) 00:06:46.661 00:06:46.661 ************************************ 00:06:46.661 END TEST thread_poller_perf 00:06:46.661 ************************************ 00:06:46.661 real 0m1.286s 00:06:46.661 user 0m1.097s 00:06:46.661 sys 0m0.081s 00:06:46.661 22:51:25 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.661 22:51:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:46.661 22:51:25 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:46.661 22:51:25 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:46.661 22:51:25 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.661 22:51:25 thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.661 ************************************ 00:06:46.661 START TEST thread_poller_perf 00:06:46.661 ************************************ 00:06:46.661 22:51:25 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:46.661 [2024-11-26 22:51:25.472179] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:46.661 [2024-11-26 22:51:25.472591] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73077 ] 00:06:46.661 [2024-11-26 22:51:25.604182] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:46.661 [2024-11-26 22:51:25.635276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.661 Running 1000 pollers for 1 seconds with 0 microseconds period. 
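The reported poller_cost is derivable from the counters above: 2615943502 busy cycles / 306000 poller runs ≈ 8548 cycles per invocation, and at tsc_hz 2600000000 (2.6 cycles per nanosecond) that is 8548 / 2.6 ≈ 3287 nsec, matching the printed values.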
00:06:46.661 [2024-11-26 22:51:25.660660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.604 [2024-11-26T22:51:26.731Z] ====================================== 00:06:47.604 [2024-11-26T22:51:26.731Z] busy:2603452798 (cyc) 00:06:47.604 [2024-11-26T22:51:26.731Z] total_run_count: 3962000 00:06:47.604 [2024-11-26T22:51:26.731Z] tsc_hz: 2600000000 (cyc) 00:06:47.604 [2024-11-26T22:51:26.731Z] ====================================== 00:06:47.605 [2024-11-26T22:51:26.732Z] poller_cost: 657 (cyc), 252 (nsec) 00:06:47.605 ************************************ 00:06:47.605 END TEST thread_poller_perf 00:06:47.605 ************************************ 00:06:47.605 00:06:47.605 real 0m1.270s 00:06:47.605 user 0m1.095s 00:06:47.605 sys 0m0.067s 00:06:47.605 22:51:26 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.605 22:51:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:47.865 22:51:26 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:47.865 ************************************ 00:06:47.865 END TEST thread 00:06:47.865 ************************************ 00:06:47.865 00:06:47.865 real 0m2.784s 00:06:47.865 user 0m2.295s 00:06:47.865 sys 0m0.267s 00:06:47.865 22:51:26 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.865 22:51:26 thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.865 22:51:26 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:47.865 22:51:26 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:47.865 22:51:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.865 22:51:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.865 22:51:26 -- common/autotest_common.sh@10 -- # set +x 00:06:47.865 ************************************ 00:06:47.865 START TEST app_cmdline 00:06:47.865 ************************************ 00:06:47.865 22:51:26 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:47.865 * Looking for test storage... 
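Same arithmetic for the second run: 2603452798 / 3962000 ≈ 657 cycles, or 657 / 2.6 ≈ 252 nsec per invocation. The roughly 13x drop against the first run is consistent with -l 0 registering untimed pollers, which skip the per-call timer bookkeeping that the period-1 pollers above pay for.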
00:06:47.865 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:47.865 22:51:26 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:47.865 22:51:26 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:47.865 22:51:26 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:47.865 22:51:26 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:47.865 22:51:26 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.865 22:51:26 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.866 22:51:26 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.866 22:51:26 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.866 22:51:26 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.866 22:51:26 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.866 22:51:26 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.866 22:51:26 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.866 22:51:26 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.866 22:51:26 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.866 22:51:26 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.866 22:51:26 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:47.866 22:51:26 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:47.866 22:51:26 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.866 22:51:26 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:47.866 22:51:26 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:47.866 22:51:26 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:47.866 22:51:26 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.866 22:51:26 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:47.866 22:51:26 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.866 22:51:26 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:47.866 22:51:26 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:47.866 22:51:26 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.866 22:51:26 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:47.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:47.866 22:51:26 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.866 22:51:26 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.866 22:51:26 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.866 22:51:26 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:47.866 22:51:26 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.866 22:51:26 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:47.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.866 --rc genhtml_branch_coverage=1 00:06:47.866 --rc genhtml_function_coverage=1 00:06:47.866 --rc genhtml_legend=1 00:06:47.866 --rc geninfo_all_blocks=1 00:06:47.866 --rc geninfo_unexecuted_blocks=1 00:06:47.866 00:06:47.866 ' 00:06:47.866 22:51:26 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:47.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.866 --rc genhtml_branch_coverage=1 00:06:47.866 --rc genhtml_function_coverage=1 00:06:47.866 --rc genhtml_legend=1 00:06:47.866 --rc geninfo_all_blocks=1 00:06:47.866 --rc geninfo_unexecuted_blocks=1 00:06:47.866 00:06:47.866 ' 00:06:47.866 22:51:26 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:47.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.866 --rc genhtml_branch_coverage=1 00:06:47.866 --rc genhtml_function_coverage=1 00:06:47.866 --rc genhtml_legend=1 00:06:47.866 --rc geninfo_all_blocks=1 00:06:47.866 --rc geninfo_unexecuted_blocks=1 00:06:47.866 00:06:47.866 ' 00:06:47.866 22:51:26 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:47.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.866 --rc genhtml_branch_coverage=1 00:06:47.866 --rc genhtml_function_coverage=1 00:06:47.866 --rc genhtml_legend=1 00:06:47.866 --rc geninfo_all_blocks=1 00:06:47.866 --rc geninfo_unexecuted_blocks=1 00:06:47.866 00:06:47.866 ' 00:06:47.866 22:51:26 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:47.866 22:51:26 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=73160 00:06:47.866 22:51:26 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:47.866 22:51:26 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 73160 00:06:47.866 22:51:26 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 73160 ']' 00:06:47.866 22:51:26 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.866 22:51:26 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.866 22:51:26 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.866 22:51:26 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.866 22:51:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:48.125 [2024-11-26 22:51:27.022018] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
00:06:48.125 [2024-11-26 22:51:27.022290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73160 ] 00:06:48.125 [2024-11-26 22:51:27.155143] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:48.125 [2024-11-26 22:51:27.186632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.125 [2024-11-26 22:51:27.212397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.060 22:51:27 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.060 22:51:27 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:49.060 22:51:27 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:49.060 { 00:06:49.060 "version": "SPDK v25.01-pre git sha1 2f2acf4eb", 00:06:49.060 "fields": { 00:06:49.060 "major": 25, 00:06:49.060 "minor": 1, 00:06:49.060 "patch": 0, 00:06:49.060 "suffix": "-pre", 00:06:49.060 "commit": "2f2acf4eb" 00:06:49.060 } 00:06:49.060 } 00:06:49.060 22:51:28 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:49.060 22:51:28 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:49.060 22:51:28 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:49.060 22:51:28 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:49.060 22:51:28 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:49.060 22:51:28 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:49.060 22:51:28 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.060 22:51:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:49.060 22:51:28 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:49.060 22:51:28 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.060 22:51:28 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:49.060 22:51:28 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:49.060 22:51:28 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:49.060 22:51:28 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:49.060 22:51:28 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:49.060 22:51:28 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:49.060 22:51:28 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.060 22:51:28 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:49.060 22:51:28 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.060 22:51:28 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:49.060 22:51:28 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.060 22:51:28 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:49.060 
22:51:28 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:49.060 22:51:28 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:49.321 request: 00:06:49.321 { 00:06:49.321 "method": "env_dpdk_get_mem_stats", 00:06:49.321 "req_id": 1 00:06:49.321 } 00:06:49.321 Got JSON-RPC error response 00:06:49.321 response: 00:06:49.321 { 00:06:49.321 "code": -32601, 00:06:49.321 "message": "Method not found" 00:06:49.321 } 00:06:49.321 22:51:28 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:49.321 22:51:28 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:49.321 22:51:28 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:49.321 22:51:28 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:49.321 22:51:28 app_cmdline -- app/cmdline.sh@1 -- # killprocess 73160 00:06:49.321 22:51:28 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 73160 ']' 00:06:49.321 22:51:28 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 73160 00:06:49.321 22:51:28 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:49.321 22:51:28 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.321 22:51:28 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73160 00:06:49.321 killing process with pid 73160 00:06:49.321 22:51:28 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.321 22:51:28 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.321 22:51:28 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73160' 00:06:49.321 22:51:28 app_cmdline -- common/autotest_common.sh@973 -- # kill 73160 00:06:49.321 22:51:28 app_cmdline -- common/autotest_common.sh@978 -- # wait 73160 00:06:49.583 00:06:49.583 real 0m1.875s 00:06:49.583 user 0m2.187s 00:06:49.583 sys 0m0.454s 00:06:49.583 22:51:28 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.583 ************************************ 00:06:49.583 END TEST app_cmdline 00:06:49.583 ************************************ 00:06:49.583 22:51:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:49.844 22:51:28 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:49.844 22:51:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.844 22:51:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.844 22:51:28 -- common/autotest_common.sh@10 -- # set +x 00:06:49.844 ************************************ 00:06:49.844 START TEST version 00:06:49.844 ************************************ 00:06:49.844 22:51:28 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:49.844 * Looking for test storage... 
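The "Method not found" exchange above is the whole point of the app_cmdline test: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two RPCs are served and any other method fails with JSON-RPC error -32601. The probe can be reproduced by hand with the same paths this run used:

    # allowed method: prints the version object shown earlier
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
    # filtered method: fails with code -32601, 'Method not found'
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats

The test wraps the second call in NOT so that its non-zero exit code counts as a pass.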
00:06:49.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:49.844 22:51:28 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:49.844 22:51:28 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:49.844 22:51:28 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:49.844 22:51:28 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:49.844 22:51:28 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.844 22:51:28 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.844 22:51:28 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.844 22:51:28 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.844 22:51:28 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.844 22:51:28 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.844 22:51:28 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.844 22:51:28 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.844 22:51:28 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.844 22:51:28 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.844 22:51:28 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.844 22:51:28 version -- scripts/common.sh@344 -- # case "$op" in 00:06:49.844 22:51:28 version -- scripts/common.sh@345 -- # : 1 00:06:49.844 22:51:28 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.845 22:51:28 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:49.845 22:51:28 version -- scripts/common.sh@365 -- # decimal 1 00:06:49.845 22:51:28 version -- scripts/common.sh@353 -- # local d=1 00:06:49.845 22:51:28 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.845 22:51:28 version -- scripts/common.sh@355 -- # echo 1 00:06:49.845 22:51:28 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.845 22:51:28 version -- scripts/common.sh@366 -- # decimal 2 00:06:49.845 22:51:28 version -- scripts/common.sh@353 -- # local d=2 00:06:49.845 22:51:28 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.845 22:51:28 version -- scripts/common.sh@355 -- # echo 2 00:06:49.845 22:51:28 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.845 22:51:28 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.845 22:51:28 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.845 22:51:28 version -- scripts/common.sh@368 -- # return 0 00:06:49.845 22:51:28 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.845 22:51:28 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:49.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.845 --rc genhtml_branch_coverage=1 00:06:49.845 --rc genhtml_function_coverage=1 00:06:49.845 --rc genhtml_legend=1 00:06:49.845 --rc geninfo_all_blocks=1 00:06:49.845 --rc geninfo_unexecuted_blocks=1 00:06:49.845 00:06:49.845 ' 00:06:49.845 22:51:28 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:49.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.845 --rc genhtml_branch_coverage=1 00:06:49.845 --rc genhtml_function_coverage=1 00:06:49.845 --rc genhtml_legend=1 00:06:49.845 --rc geninfo_all_blocks=1 00:06:49.845 --rc geninfo_unexecuted_blocks=1 00:06:49.845 00:06:49.845 ' 00:06:49.845 22:51:28 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:49.845 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:49.845 --rc genhtml_branch_coverage=1 00:06:49.845 --rc genhtml_function_coverage=1 00:06:49.845 --rc genhtml_legend=1 00:06:49.845 --rc geninfo_all_blocks=1 00:06:49.845 --rc geninfo_unexecuted_blocks=1 00:06:49.845 00:06:49.845 ' 00:06:49.845 22:51:28 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:49.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.845 --rc genhtml_branch_coverage=1 00:06:49.845 --rc genhtml_function_coverage=1 00:06:49.845 --rc genhtml_legend=1 00:06:49.845 --rc geninfo_all_blocks=1 00:06:49.845 --rc geninfo_unexecuted_blocks=1 00:06:49.845 00:06:49.845 ' 00:06:49.845 22:51:28 version -- app/version.sh@17 -- # get_header_version major 00:06:49.845 22:51:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:49.845 22:51:28 version -- app/version.sh@14 -- # cut -f2 00:06:49.845 22:51:28 version -- app/version.sh@14 -- # tr -d '"' 00:06:49.845 22:51:28 version -- app/version.sh@17 -- # major=25 00:06:49.845 22:51:28 version -- app/version.sh@18 -- # get_header_version minor 00:06:49.845 22:51:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:49.845 22:51:28 version -- app/version.sh@14 -- # tr -d '"' 00:06:49.845 22:51:28 version -- app/version.sh@14 -- # cut -f2 00:06:49.845 22:51:28 version -- app/version.sh@18 -- # minor=1 00:06:49.845 22:51:28 version -- app/version.sh@19 -- # get_header_version patch 00:06:49.845 22:51:28 version -- app/version.sh@14 -- # cut -f2 00:06:49.845 22:51:28 version -- app/version.sh@14 -- # tr -d '"' 00:06:49.845 22:51:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:49.845 22:51:28 version -- app/version.sh@19 -- # patch=0 00:06:49.845 22:51:28 version -- app/version.sh@20 -- # get_header_version suffix 00:06:49.845 22:51:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:49.845 22:51:28 version -- app/version.sh@14 -- # cut -f2 00:06:49.845 22:51:28 version -- app/version.sh@14 -- # tr -d '"' 00:06:49.845 22:51:28 version -- app/version.sh@20 -- # suffix=-pre 00:06:49.845 22:51:28 version -- app/version.sh@22 -- # version=25.1 00:06:49.845 22:51:28 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:49.845 22:51:28 version -- app/version.sh@28 -- # version=25.1rc0 00:06:49.845 22:51:28 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:49.845 22:51:28 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:49.845 22:51:28 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:49.845 22:51:28 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:49.845 00:06:49.845 real 0m0.196s 00:06:49.845 user 0m0.116s 00:06:49.845 sys 0m0.108s 00:06:49.845 ************************************ 00:06:49.845 END TEST version 00:06:49.845 ************************************ 00:06:49.845 22:51:28 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.845 22:51:28 version -- common/autotest_common.sh@10 -- # set +x 00:06:49.845 22:51:28 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:49.845 22:51:28 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:49.845 22:51:28 -- spdk/autotest.sh@194 -- # uname -s 00:06:49.845 22:51:28 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:49.845 22:51:28 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:49.845 22:51:28 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:49.845 22:51:28 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:49.845 22:51:28 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:49.845 22:51:28 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:49.845 22:51:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.845 22:51:28 -- common/autotest_common.sh@10 -- # set +x 00:06:50.107 ************************************ 00:06:50.107 START TEST blockdev_nvme 00:06:50.107 ************************************ 00:06:50.107 22:51:28 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:50.107 * Looking for test storage... 00:06:50.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:50.107 22:51:29 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:50.107 22:51:29 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:50.107 22:51:29 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:06:50.107 22:51:29 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:50.107 22:51:29 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.107 22:51:29 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.107 22:51:29 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.107 22:51:29 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.107 22:51:29 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.107 22:51:29 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.107 22:51:29 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.107 22:51:29 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.107 22:51:29 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.107 22:51:29 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.107 22:51:29 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.107 22:51:29 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:50.107 22:51:29 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:50.107 22:51:29 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.107 22:51:29 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.107 22:51:29 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:50.107 22:51:29 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:50.107 22:51:29 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.107 22:51:29 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:50.107 22:51:29 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.107 22:51:29 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:50.107 22:51:29 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:50.107 22:51:29 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.107 22:51:29 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:50.107 22:51:29 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.107 22:51:29 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.107 22:51:29 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.107 22:51:29 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:50.107 22:51:29 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.107 22:51:29 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:50.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.107 --rc genhtml_branch_coverage=1 00:06:50.107 --rc genhtml_function_coverage=1 00:06:50.107 --rc genhtml_legend=1 00:06:50.107 --rc geninfo_all_blocks=1 00:06:50.107 --rc geninfo_unexecuted_blocks=1 00:06:50.107 00:06:50.107 ' 00:06:50.107 22:51:29 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:50.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.107 --rc genhtml_branch_coverage=1 00:06:50.107 --rc genhtml_function_coverage=1 00:06:50.107 --rc genhtml_legend=1 00:06:50.107 --rc geninfo_all_blocks=1 00:06:50.107 --rc geninfo_unexecuted_blocks=1 00:06:50.107 00:06:50.107 ' 00:06:50.107 22:51:29 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:50.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.107 --rc genhtml_branch_coverage=1 00:06:50.107 --rc genhtml_function_coverage=1 00:06:50.107 --rc genhtml_legend=1 00:06:50.107 --rc geninfo_all_blocks=1 00:06:50.107 --rc geninfo_unexecuted_blocks=1 00:06:50.107 00:06:50.107 ' 00:06:50.107 22:51:29 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:50.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.107 --rc genhtml_branch_coverage=1 00:06:50.107 --rc genhtml_function_coverage=1 00:06:50.107 --rc genhtml_legend=1 00:06:50.107 --rc geninfo_all_blocks=1 00:06:50.107 --rc geninfo_unexecuted_blocks=1 00:06:50.107 00:06:50.107 ' 00:06:50.107 22:51:29 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:50.107 22:51:29 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:50.107 22:51:29 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:50.107 22:51:29 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:50.107 22:51:29 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:50.107 22:51:29 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:50.107 22:51:29 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:50.107 22:51:29 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:50.107 22:51:29 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:50.107 22:51:29 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:50.107 22:51:29 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:50.107 22:51:29 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:50.107 22:51:29 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:06:50.107 22:51:29 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:50.107 22:51:29 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:50.107 22:51:29 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:06:50.107 22:51:29 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:50.107 22:51:29 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:06:50.107 22:51:29 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:50.107 22:51:29 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:50.107 22:51:29 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:50.107 22:51:29 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:06:50.107 22:51:29 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:06:50.107 22:51:29 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:50.107 22:51:29 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73321 00:06:50.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.107 22:51:29 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:50.107 22:51:29 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 73321 00:06:50.107 22:51:29 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 73321 ']' 00:06:50.107 22:51:29 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:50.107 22:51:29 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.107 22:51:29 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.107 22:51:29 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.107 22:51:29 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.107 22:51:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:50.107 [2024-11-26 22:51:29.211512] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:50.107 [2024-11-26 22:51:29.211627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73321 ] 00:06:50.367 [2024-11-26 22:51:29.344590] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
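waitforlisten above simply blocks until pid 73321 is serving RPCs on /var/tmp/spdk.sock. A condensed sketch of that kind of wait loop (an assumed equivalent, not the verbatim autotest_common.sh source; the helper name is ours):

    wait_for_rpc_sock() {  # hypothetical helper, sketching what waitforlisten does
        local sock=${1:-/var/tmp/spdk.sock} retries=100
        while (( retries-- > 0 )); do
            # socket exists and the target answers a trivial RPC
            [ -S "$sock" ] && /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" \
                rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.1
        done
        return 1  # target never came up
    }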
00:06:50.367 [2024-11-26 22:51:29.374721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.367 [2024-11-26 22:51:29.399077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.938 22:51:30 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.938 22:51:30 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:50.939 22:51:30 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:50.939 22:51:30 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:06:50.939 22:51:30 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:50.939 22:51:30 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:50.939 22:51:30 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:51.199 22:51:30 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:51.199 22:51:30 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.199 22:51:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.460 22:51:30 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.460 22:51:30 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:51.460 22:51:30 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.460 22:51:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.460 22:51:30 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.460 22:51:30 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:06:51.460 22:51:30 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:51.460 22:51:30 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.460 22:51:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.460 22:51:30 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.460 22:51:30 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:51.460 22:51:30 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.460 22:51:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.460 22:51:30 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.460 22:51:30 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:51.460 22:51:30 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.460 22:51:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.460 22:51:30 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.460 22:51:30 blockdev_nvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:51.460 22:51:30 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:51.460 22:51:30 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:51.460 22:51:30 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 
00:06:51.460 22:51:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.460 22:51:30 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.460 22:51:30 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:51.460 22:51:30 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:51.461 22:51:30 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "c2d618a8-104e-41fe-9252-0655a34dd989"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "c2d618a8-104e-41fe-9252-0655a34dd989",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "eec3b62c-cb80-46da-8ca1-8851c6d51057"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "eec3b62c-cb80-46da-8ca1-8851c6d51057",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": 
"Nvme2n1",' ' "aliases": [' ' "cb237a5c-8dd3-418d-828d-cb6dc2827ef6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cb237a5c-8dd3-418d-828d-cb6dc2827ef6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "735bbf80-58bf-4d1e-8364-b67348b980d4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "735bbf80-58bf-4d1e-8364-b67348b980d4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "9ecea29b-dab2-4715-bdfd-dfd0a2158bda"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9ecea29b-dab2-4715-bdfd-dfd0a2158bda",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": 
true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "0a8947b6-7c85-47f0-ac9c-6424ae0c45e8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "0a8947b6-7c85-47f0-ac9c-6424ae0c45e8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:51.461 22:51:30 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:51.461 22:51:30 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:51.461 22:51:30 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:51.461 22:51:30 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 73321 00:06:51.461 22:51:30 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 73321 ']' 00:06:51.461 22:51:30 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 73321 00:06:51.461 22:51:30 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:51.461 22:51:30 blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.461 22:51:30 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73321 00:06:51.461 killing process with pid 73321 00:06:51.461 22:51:30 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.461 22:51:30 blockdev_nvme 
-- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.461 22:51:30 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73321' 00:06:51.461 22:51:30 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 73321 00:06:51.461 22:51:30 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 73321 00:06:52.032 22:51:30 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:52.032 22:51:30 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:52.032 22:51:30 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:52.032 22:51:30 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.032 22:51:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:52.032 ************************************ 00:06:52.032 START TEST bdev_hello_world 00:06:52.032 ************************************ 00:06:52.032 22:51:30 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:52.032 [2024-11-26 22:51:30.971852] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:52.032 [2024-11-26 22:51:30.971975] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73394 ] 00:06:52.032 [2024-11-26 22:51:31.105900] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:52.032 [2024-11-26 22:51:31.137801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.327 [2024-11-26 22:51:31.161916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.618 [2024-11-26 22:51:31.546208] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:52.618 [2024-11-26 22:51:31.546265] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:52.618 [2024-11-26 22:51:31.546284] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:52.618 [2024-11-26 22:51:31.548534] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:52.618 [2024-11-26 22:51:31.549412] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:52.618 [2024-11-26 22:51:31.549440] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:52.618 [2024-11-26 22:51:31.550281] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
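The NOTICE lines above are the complete hello_bdev life cycle: start the app, open the Nvme0n1 bdev, get an I/O channel, write the string, read it back, stop. The example can be replayed outside the harness with the same artifacts this run used (root privileges for PCIe access assumed; the trailing '' is the empty env_ctx argument blockdev.sh passes through):

    sudo /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b Nvme0n1 ''

On success it ends with the same 'Read string from bdev : Hello World!' notice.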
00:06:52.618 00:06:52.618 [2024-11-26 22:51:31.550323] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:52.618 00:06:52.618 real 0m0.822s 00:06:52.618 user 0m0.540s 00:06:52.618 sys 0m0.178s 00:06:52.618 22:51:31 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.618 ************************************ 00:06:52.618 END TEST bdev_hello_world 00:06:52.618 ************************************ 00:06:52.618 22:51:31 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:52.879 22:51:31 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:52.879 22:51:31 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:52.879 22:51:31 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.879 22:51:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:52.879 ************************************ 00:06:52.879 START TEST bdev_bounds 00:06:52.879 ************************************ 00:06:52.879 22:51:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:52.879 22:51:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73425 00:06:52.879 Process bdevio pid: 73425 00:06:52.879 22:51:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:52.879 22:51:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73425' 00:06:52.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.879 22:51:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73425 00:06:52.879 22:51:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 73425 ']' 00:06:52.879 22:51:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.879 22:51:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.879 22:51:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.879 22:51:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.879 22:51:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:52.880 22:51:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:52.880 [2024-11-26 22:51:31.837466] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:52.880 [2024-11-26 22:51:31.837700] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73425 ] 00:06:52.880 [2024-11-26 22:51:31.970884] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
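Unlike the single-core runs earlier in the log, bdevio is launched with -c 0x7, and the EAL reads that mask bit-per-core. A couple of lines of shell make the mapping explicit:

    mask=0x7                                   # 0b111
    for core in 0 1 2 3; do
        (( mask >> core & 1 )) && echo "reactor expected on core $core"
    done
    # -> cores 0, 1 and 2; compare the three 'Reactor started' notices below

which is why the next lines report 'Total cores available: 3' and start three reactors.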
00:06:52.880 [2024-11-26 22:51:31.999413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.141 [2024-11-26 22:51:32.026091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.141 [2024-11-26 22:51:32.026360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.141 [2024-11-26 22:51:32.026439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.713 22:51:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.713 22:51:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:53.713 22:51:32 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:53.713 I/O targets: 00:06:53.713 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:53.713 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:53.713 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:53.713 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:53.713 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:53.713 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:53.713 00:06:53.713 00:06:53.713 CUnit - A unit testing framework for C - Version 2.1-3 00:06:53.713 http://cunit.sourceforge.net/ 00:06:53.713 00:06:53.713 00:06:53.713 Suite: bdevio tests on: Nvme3n1 00:06:53.713 Test: blockdev write read block ...passed 00:06:53.713 Test: blockdev write zeroes read block ...passed 00:06:53.713 Test: blockdev write zeroes read no split ...passed 00:06:53.713 Test: blockdev write zeroes read split ...passed 00:06:53.713 Test: blockdev write zeroes read split partial ...passed 00:06:53.713 Test: blockdev reset ...[2024-11-26 22:51:32.775655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:53.713 passed 00:06:53.713 Test: blockdev write read 8 blocks ...[2024-11-26 22:51:32.777478] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:53.713 passed 00:06:53.713 Test: blockdev write read size > 128k ...passed 00:06:53.713 Test: blockdev write read invalid size ...passed 00:06:53.713 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:53.713 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:53.713 Test: blockdev write read max offset ...passed 00:06:53.713 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:53.713 Test: blockdev writev readv 8 blocks ...passed 00:06:53.713 Test: blockdev writev readv 30 x 1block ...passed 00:06:53.713 Test: blockdev writev readv block ...passed 00:06:53.713 Test: blockdev writev readv size > 128k ...passed 00:06:53.713 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:53.713 Test: blockdev comparev and writev ...[2024-11-26 22:51:32.784846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d260e000 len:0x1000 00:06:53.713 [2024-11-26 22:51:32.785020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:53.713 passed 00:06:53.713 Test: blockdev nvme passthru rw ...passed 00:06:53.713 Test: blockdev nvme passthru vendor specific ...passed 00:06:53.713 Test: blockdev nvme admin passthru ...[2024-11-26 22:51:32.785919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:53.713 [2024-11-26 22:51:32.785956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:53.713 passed 00:06:53.713 Test: blockdev copy ...passed 00:06:53.713 Suite: bdevio tests on: Nvme2n3 00:06:53.713 Test: blockdev write read block ...passed 00:06:53.713 Test: blockdev write zeroes read block ...passed 00:06:53.713 Test: blockdev write zeroes read no split ...passed 00:06:53.713 Test: blockdev write zeroes read split ...passed 00:06:53.713 Test: blockdev write zeroes read split partial ...passed 00:06:53.713 Test: blockdev reset ...[2024-11-26 22:51:32.811243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:53.713 [2024-11-26 22:51:32.813246] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:06:53.713 passed 00:06:53.713 Test: blockdev write read 8 blocks ...
00:06:53.713 passed 00:06:53.713 Test: blockdev write read size > 128k ...passed 00:06:53.713 Test: blockdev write read invalid size ...passed 00:06:53.713 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:53.713 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:53.713 Test: blockdev write read max offset ...passed 00:06:53.713 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:53.713 Test: blockdev writev readv 8 blocks ...passed 00:06:53.713 Test: blockdev writev readv 30 x 1block ...passed 00:06:53.713 Test: blockdev writev readv block ...passed 00:06:53.713 Test: blockdev writev readv size > 128k ...passed 00:06:53.713 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:53.713 Test: blockdev comparev and writev ...[2024-11-26 22:51:32.824976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d2606000 len:0x1000 00:06:53.713 [2024-11-26 22:51:32.825027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:53.713 passed 00:06:53.713 Test: blockdev nvme passthru rw ...passed 00:06:53.713 Test: blockdev nvme passthru vendor specific ...passed 00:06:53.713 Test: blockdev nvme admin passthru ...[2024-11-26 22:51:32.826176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:53.713 [2024-11-26 22:51:32.826209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:53.713 passed 00:06:53.713 Test: blockdev copy ...passed 00:06:53.713 Suite: bdevio tests on: Nvme2n2 00:06:53.713 Test: blockdev write read block ...passed 00:06:53.713 Test: blockdev write zeroes read block ...passed 00:06:53.976 Test: blockdev write zeroes read no split ...passed 00:06:53.976 Test: blockdev write zeroes read split ...passed 00:06:53.976 Test: blockdev write zeroes read split partial ...passed 00:06:53.976 Test: blockdev reset ...[2024-11-26 22:51:32.850262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:53.976 [2024-11-26 22:51:32.852992] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:53.976 passed 00:06:53.976 Test: blockdev write read 8 blocks ...passed 00:06:53.976 Test: blockdev write read size > 128k ...passed 00:06:53.976 Test: blockdev write read invalid size ...passed 00:06:53.976 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:53.976 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:53.976 Test: blockdev write read max offset ...passed 00:06:53.976 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:53.976 Test: blockdev writev readv 8 blocks ...passed 00:06:53.976 Test: blockdev writev readv 30 x 1block ...passed 00:06:53.976 Test: blockdev writev readv block ...passed 00:06:53.976 Test: blockdev writev readv size > 128k ...passed 00:06:53.976 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:53.976 Test: blockdev comparev and writev ...[2024-11-26 22:51:32.861885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d2608000 len:0x1000 00:06:53.976 [2024-11-26 22:51:32.861926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:53.976 passed 00:06:53.976 Test: blockdev nvme passthru rw ...passed 00:06:53.976 Test: blockdev nvme passthru vendor specific ...[2024-11-26 22:51:32.862853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:53.976 [2024-11-26 22:51:32.862877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:53.976 passed 00:06:53.976 Test: blockdev nvme admin passthru ...passed 00:06:53.976 Test: blockdev copy ...passed 00:06:53.976 Suite: bdevio tests on: Nvme2n1 00:06:53.976 Test: blockdev write read block ...passed 00:06:53.976 Test: blockdev write zeroes read block ...passed 00:06:53.976 Test: blockdev write zeroes read no split ...passed 00:06:53.976 Test: blockdev write zeroes read split ...passed 00:06:53.976 Test: blockdev write zeroes read split partial ...passed 00:06:53.976 Test: blockdev reset ...[2024-11-26 22:51:32.885241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:53.976 [2024-11-26 22:51:32.887396] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:53.976 passed 00:06:53.976 Test: blockdev write read 8 blocks ...passed 00:06:53.976 Test: blockdev write read size > 128k ...passed 00:06:53.976 Test: blockdev write read invalid size ...passed 00:06:53.976 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:53.976 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:53.976 Test: blockdev write read max offset ...passed 00:06:53.976 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:53.976 Test: blockdev writev readv 8 blocks ...passed 00:06:53.976 Test: blockdev writev readv 30 x 1block ...passed 00:06:53.976 Test: blockdev writev readv block ...passed 00:06:53.976 Test: blockdev writev readv size > 128k ...passed 00:06:53.976 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:53.976 Test: blockdev comparev and writev ...[2024-11-26 22:51:32.895040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d2204000 len:0x1000 [2024-11-26 22:51:32.895078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:53.976 passed 00:06:53.976 Test: blockdev nvme passthru rw ...passed 00:06:53.976 Test: blockdev nvme passthru vendor specific ...[2024-11-26 22:51:32.895947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:53.976 passed 00:06:53.976 Test: blockdev nvme admin passthru ...[2024-11-26 22:51:32.896075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:53.976 passed 00:06:53.977 Test: blockdev copy ...passed 00:06:53.977 Suite: bdevio tests on: Nvme1n1 00:06:53.977 Test: blockdev write read block ...passed 00:06:53.977 Test: blockdev write zeroes read block ...passed 00:06:53.977 Test: blockdev write zeroes read no split ...passed 00:06:53.977 Test: blockdev write zeroes read split ...passed 00:06:53.977 Test: blockdev write zeroes read split partial ...passed 00:06:53.977 Test: blockdev reset ...[2024-11-26 22:51:32.924024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:53.977 [2024-11-26 22:51:32.925772] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:06:53.977 passed 00:06:53.977 Test: blockdev write read 8 blocks ...passed 00:06:53.977 Test: blockdev write read size > 128k ...passed 00:06:53.977 Test: blockdev write read invalid size ...passed 00:06:53.977 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:53.977 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:53.977 Test: blockdev write read max offset ...passed 00:06:53.977 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:53.977 Test: blockdev writev readv 8 blocks ...passed 00:06:53.977 Test: blockdev writev readv 30 x 1block ...passed 00:06:53.977 Test: blockdev writev readv block ...passed 00:06:53.977 Test: blockdev writev readv size > 128k ...passed 00:06:53.977 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:53.977 Test: blockdev comparev and writev ...[2024-11-26 22:51:32.934099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2eaa3d000 len:0x1000 00:06:53.977 [2024-11-26 22:51:32.934231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:53.977 passed 00:06:53.977 Test: blockdev nvme passthru rw ...passed 00:06:53.977 Test: blockdev nvme passthru vendor specific ...[2024-11-26 22:51:32.935343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:53.977 [2024-11-26 22:51:32.935451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:53.977 passed 00:06:53.977 Test: blockdev nvme admin passthru ...passed 00:06:53.977 Test: blockdev copy ...passed 00:06:53.977 Suite: bdevio tests on: Nvme0n1 00:06:53.977 Test: blockdev write read block ...passed 00:06:53.977 Test: blockdev write zeroes read block ...passed 00:06:53.977 Test: blockdev write zeroes read no split ...passed 00:06:53.977 Test: blockdev write zeroes read split ...passed 00:06:53.977 Test: blockdev write zeroes read split partial ...passed 00:06:53.977 Test: blockdev reset ...[2024-11-26 22:51:32.964624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:53.977 [2024-11-26 22:51:32.967065] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:53.977 passed 
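The "(02/85)" and "(00/01)" pairs printed with these completions are NVMe status-code-type/status-code values: type 0x2 (media and data integrity errors) with code 0x85 is Compare Failure, and type 0x0 (generic command status) with code 0x01 is Invalid Opcode. A tiny decode helper covering just the two pairs seen in this run:

  # Sketch: decode the "(SCT/SC)" pair spdk_nvme_print_completion prints.
  # Only the two status pairs observed in this log are in the table.
  decode_status() {
    case "$1" in
      02/85) echo 'media/data integrity error: COMPARE FAILURE' ;;
      00/01) echo 'generic command error: INVALID OPCODE' ;;
      *)     echo "status $1 not covered by this sketch" ;;
    esac
  }
  decode_status 02/85   # -> media/data integrity error: COMPARE FAILURE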
00:06:53.977 Test: blockdev write read 8 blocks ...passed 00:06:53.977 Test: blockdev write read size > 128k ...passed 00:06:53.977 Test: blockdev write read invalid size ...passed 00:06:53.977 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:53.977 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:53.977 Test: blockdev write read max offset ...passed 00:06:53.977 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:53.977 Test: blockdev writev readv 8 blocks ...passed 00:06:53.977 Test: blockdev writev readv 30 x 1block ...passed 00:06:53.977 Test: blockdev writev readv block ...passed 00:06:53.977 Test: blockdev writev readv size > 128k ...passed 00:06:53.977 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:53.977 Test: blockdev comparev and writev ...[2024-11-26 22:51:32.976841] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:53.977 separate metadata which is not supported yet. 00:06:53.977 passed 00:06:53.977 Test: blockdev nvme passthru rw ...passed 00:06:53.977 Test: blockdev nvme passthru vendor specific ...[2024-11-26 22:51:32.978192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:53.977 [2024-11-26 22:51:32.978309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:53.977 passed 00:06:53.977 Test: blockdev nvme admin passthru ...passed 00:06:53.977 Test: blockdev copy ...passed 00:06:53.977 00:06:53.977 Run Summary: Type Total Ran Passed Failed Inactive 00:06:53.977 suites 6 6 n/a 0 0 00:06:53.977 tests 138 138 138 0 0 00:06:53.977 asserts 893 893 893 0 n/a 00:06:53.977 00:06:53.977 Elapsed time = 0.496 seconds 00:06:53.977 0 00:06:53.977 22:51:32 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73425 00:06:53.977 22:51:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 73425 ']' 00:06:53.977 22:51:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 73425 00:06:53.977 22:51:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:53.977 22:51:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.977 22:51:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73425 00:06:53.977 22:51:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.977 22:51:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.977 22:51:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73425' 00:06:53.977 killing process with pid 73425 00:06:53.977 22:51:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 73425 00:06:53.977 22:51:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 73425 00:06:54.239 22:51:33 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:54.239 00:06:54.239 real 0m1.426s 00:06:54.239 user 0m3.508s 00:06:54.239 sys 0m0.323s 00:06:54.239 22:51:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.239 22:51:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:54.239 ************************************ 00:06:54.239 END TEST bdev_bounds 
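The bdev_bounds teardown traced above is autotest_common.sh's killprocess helper: check the pid argument, probe liveness with kill -0, read the process name via ps so that only the expected reactor_0 (and never, say, sudo) is signalled, then kill and reap with wait. Reconstructed from the xtrace, a simplified version looks roughly like:

  # Simplified sketch of the killprocess pattern traced above; the real
  # helper in autotest_common.sh also has a branch for sudo-owned targets.
  killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                       # still running?
    process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0
    echo "killing process with pid $pid ($process_name)"
    kill "$pid"
    wait "$pid"    # reap; works because the target is this shell's child
  }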
************************************ 00:06:54.239 22:51:33 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:54.239 22:51:33 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:54.239 22:51:33 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.239 22:51:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.239 ************************************ 00:06:54.239 START TEST bdev_nbd 00:06:54.239 ************************************ 00:06:54.239 22:51:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:54.239 22:51:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:54.239 22:51:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:54.239 22:51:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.239 22:51:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:54.239 22:51:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:54.239 22:51:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:54.239 22:51:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:54.239 22:51:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:54.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
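The nbd_function_test entered above only makes sense when the kernel's nbd driver is present, hence the [[ -e /sys/module/nbd ]] probe in the trace. A guard of the same shape with a best-effort modprobe fallback (the fallback is this sketch's addition, not part of the traced helper):

  # Sketch: bail out of NBD tests cleanly if the kernel module is missing.
  if [[ ! -e /sys/module/nbd ]]; then
    modprobe nbd || { echo 'nbd kernel module unavailable, skipping'; exit 0; }
  fi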
00:06:54.239 22:51:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:54.240 22:51:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:54.240 22:51:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:54.240 22:51:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:54.240 22:51:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:54.240 22:51:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:54.240 22:51:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:54.240 22:51:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73468 00:06:54.240 22:51:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:54.240 22:51:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73468 /var/tmp/spdk-nbd.sock 00:06:54.240 22:51:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 73468 ']' 00:06:54.240 22:51:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:54.240 22:51:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.240 22:51:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:54.240 22:51:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.240 22:51:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:54.240 22:51:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:54.240 [2024-11-26 22:51:33.312411] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:06:54.240 [2024-11-26 22:51:33.312530] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.501 [2024-11-26 22:51:33.446143] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
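The harness launches bdev_svc on a private RPC socket with the bdev.json config, then blocks in waitforlisten until that socket answers. A condensed sketch of the launch sequence traced above (polling rpc_get_methods is this sketch's choice of liveness probe, not necessarily what waitforlisten does internally):

  #!/usr/bin/env bash
  # Sketch: start bdev_svc on its own RPC socket and wait until it listens.
  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/spdk-nbd.sock
  "$spdk/test/app/bdev_svc/bdev_svc" -r "$sock" -i 0 \
    --json "$spdk/test/bdev/bdev.json" &
  nbd_pid=$!
  echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
  until "$spdk/scripts/rpc.py" -s "$sock" rpc_get_methods &>/dev/null; do
    kill -0 "$nbd_pid" 2>/dev/null || { echo 'bdev_svc exited early'; exit 1; }
    sleep 0.1
  done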
00:06:54.501 [2024-11-26 22:51:33.475105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.501 [2024-11-26 22:51:33.500028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.073 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.073 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:55.073 22:51:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:55.073 22:51:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.073 22:51:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:55.073 22:51:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:55.073 22:51:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:55.073 22:51:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.074 22:51:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:55.074 22:51:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:55.074 22:51:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:55.074 22:51:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:55.074 22:51:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:55.074 22:51:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:55.074 22:51:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:55.335 22:51:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:55.335 22:51:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:55.335 22:51:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:55.335 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:55.335 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:55.335 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.335 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.335 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:55.335 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:55.335 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.335 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.335 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.335 1+0 records in 00:06:55.335 1+0 records out 00:06:55.335 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425722 s, 9.6 MB/s 00:06:55.335 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.335 22:51:34 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@890 -- # size=4096 00:06:55.335 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.335 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.335 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:55.335 22:51:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:55.335 22:51:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:55.335 22:51:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:55.596 22:51:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:55.596 22:51:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:55.596 22:51:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:55.596 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:55.596 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:55.596 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.596 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.596 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:55.596 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:55.596 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.596 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.596 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.596 1+0 records in 00:06:55.596 1+0 records out 00:06:55.596 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449236 s, 9.1 MB/s 00:06:55.596 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.596 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:55.596 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.596 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.596 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:55.596 22:51:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:55.596 22:51:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:55.596 22:51:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:55.857 22:51:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:55.857 22:51:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:55.857 22:51:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:55.857 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:55.858 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:55.858 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( 
i = 1 )) 00:06:55.858 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.858 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:55.858 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:55.858 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.858 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.858 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.858 1+0 records in 00:06:55.858 1+0 records out 00:06:55.858 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127945 s, 3.2 MB/s 00:06:55.858 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.858 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:55.858 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.858 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.858 22:51:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:55.858 22:51:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:55.858 22:51:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:55.858 22:51:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:56.119 22:51:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:56.119 22:51:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:56.119 22:51:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:56.119 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:56.119 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:56.119 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:56.119 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:56.119 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:56.119 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:56.119 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:56.119 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:56.119 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.119 1+0 records in 00:06:56.119 1+0 records out 00:06:56.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509938 s, 8.0 MB/s 00:06:56.119 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.119 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:56.119 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.119 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 
'!=' 0 ']' 00:06:56.119 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:56.119 22:51:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:56.119 22:51:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:56.119 22:51:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:56.380 22:51:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:56.381 22:51:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:56.381 22:51:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:56.381 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:56.381 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:56.381 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:56.381 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:56.381 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:56.381 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:56.381 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:56.381 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:56.381 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.381 1+0 records in 00:06:56.381 1+0 records out 00:06:56.381 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000705017 s, 5.8 MB/s 00:06:56.381 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.381 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:56.381 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.381 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:56.381 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:56.381 22:51:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:56.381 22:51:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:56.381 22:51:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:56.643 22:51:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:56.643 22:51:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:56.643 22:51:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:56.643 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:56.643 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:56.643 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:56.643 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:56.643 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:56.643 22:51:35 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@877 -- # break 00:06:56.643 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:56.643 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:56.643 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.643 1+0 records in 00:06:56.643 1+0 records out 00:06:56.643 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000998303 s, 4.1 MB/s 00:06:56.643 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.643 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:56.643 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.643 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:56.643 22:51:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:56.643 22:51:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:56.643 22:51:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:56.644 22:51:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:56.905 22:51:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:56.905 { 00:06:56.905 "nbd_device": "/dev/nbd0", 00:06:56.905 "bdev_name": "Nvme0n1" 00:06:56.905 }, 00:06:56.905 { 00:06:56.905 "nbd_device": "/dev/nbd1", 00:06:56.905 "bdev_name": "Nvme1n1" 00:06:56.905 }, 00:06:56.905 { 00:06:56.905 "nbd_device": "/dev/nbd2", 00:06:56.905 "bdev_name": "Nvme2n1" 00:06:56.905 }, 00:06:56.905 { 00:06:56.905 "nbd_device": "/dev/nbd3", 00:06:56.905 "bdev_name": "Nvme2n2" 00:06:56.905 }, 00:06:56.905 { 00:06:56.905 "nbd_device": "/dev/nbd4", 00:06:56.905 "bdev_name": "Nvme2n3" 00:06:56.905 }, 00:06:56.905 { 00:06:56.905 "nbd_device": "/dev/nbd5", 00:06:56.905 "bdev_name": "Nvme3n1" 00:06:56.905 } 00:06:56.905 ]' 00:06:56.905 22:51:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:56.905 22:51:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:56.905 { 00:06:56.905 "nbd_device": "/dev/nbd0", 00:06:56.905 "bdev_name": "Nvme0n1" 00:06:56.905 }, 00:06:56.905 { 00:06:56.905 "nbd_device": "/dev/nbd1", 00:06:56.905 "bdev_name": "Nvme1n1" 00:06:56.905 }, 00:06:56.905 { 00:06:56.905 "nbd_device": "/dev/nbd2", 00:06:56.905 "bdev_name": "Nvme2n1" 00:06:56.905 }, 00:06:56.905 { 00:06:56.905 "nbd_device": "/dev/nbd3", 00:06:56.905 "bdev_name": "Nvme2n2" 00:06:56.905 }, 00:06:56.905 { 00:06:56.905 "nbd_device": "/dev/nbd4", 00:06:56.905 "bdev_name": "Nvme2n3" 00:06:56.905 }, 00:06:56.905 { 00:06:56.905 "nbd_device": "/dev/nbd5", 00:06:56.905 "bdev_name": "Nvme3n1" 00:06:56.905 } 00:06:56.905 ]' 00:06:56.905 22:51:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:56.905 22:51:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:56.905 22:51:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.905 22:51:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:56.905 22:51:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:56.905 22:51:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:56.905 22:51:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.905 22:51:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:56.905 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:56.905 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:56.905 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:56.905 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.905 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.905 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:56.905 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:56.905 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.905 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.905 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:57.167 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:57.167 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:57.167 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:57.167 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.167 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.167 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:57.167 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.167 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.167 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.167 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:57.428 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:57.428 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:57.428 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:57.428 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.428 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.428 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:57.428 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.428 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.428 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.428 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:57.689 
22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:57.689 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:57.689 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:57.689 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.689 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.689 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:57.689 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.689 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.689 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.689 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:57.950 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:57.950 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:57.950 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:57.950 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.950 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.950 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:57.950 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.950 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.950 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.950 22:51:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@65 -- # echo '' 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:58.212 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:58.474 /dev/nbd0 00:06:58.474 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:58.474 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:58.474 22:51:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:58.474 22:51:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:58.474 22:51:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.474 22:51:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.474 22:51:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:58.474 22:51:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:58.474 
22:51:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.474 22:51:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.474 22:51:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.474 1+0 records in 00:06:58.474 1+0 records out 00:06:58.474 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000806727 s, 5.1 MB/s 00:06:58.474 22:51:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.474 22:51:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:58.474 22:51:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.474 22:51:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.474 22:51:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:58.474 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.474 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:58.474 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:58.735 /dev/nbd1 00:06:58.735 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:58.735 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:58.735 22:51:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:58.735 22:51:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:58.735 22:51:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.735 22:51:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.735 22:51:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:58.735 22:51:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:58.735 22:51:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.735 22:51:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.735 22:51:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.735 1+0 records in 00:06:58.735 1+0 records out 00:06:58.735 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00184657 s, 2.2 MB/s 00:06:58.735 22:51:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.735 22:51:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:58.735 22:51:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.735 22:51:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.735 22:51:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:58.735 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.735 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:58.736 22:51:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:58.996 /dev/nbd10 00:06:58.996 22:51:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:58.997 22:51:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:58.997 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:58.997 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:58.997 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.997 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.997 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:58.997 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:58.997 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.997 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.997 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.997 1+0 records in 00:06:58.997 1+0 records out 00:06:58.997 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519265 s, 7.9 MB/s 00:06:58.997 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.997 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:58.997 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.997 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.997 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:58.997 22:51:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.997 22:51:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:58.997 22:51:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:59.257 /dev/nbd11 00:06:59.257 22:51:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:59.257 22:51:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:59.257 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:59.257 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:59.257 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.257 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.257 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:59.257 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:59.257 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.257 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.257 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.257 1+0 records in 00:06:59.257 1+0 records 
out 00:06:59.257 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000912831 s, 4.5 MB/s 00:06:59.258 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.258 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:59.258 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.258 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:59.258 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:59.258 22:51:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.258 22:51:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:59.258 22:51:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:59.519 /dev/nbd12 00:06:59.519 22:51:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:59.519 22:51:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:59.519 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:59.519 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:59.519 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.519 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.519 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:59.519 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:59.519 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.519 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.519 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.519 1+0 records in 00:06:59.519 1+0 records out 00:06:59.519 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000502191 s, 8.2 MB/s 00:06:59.519 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.519 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:59.519 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.519 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:59.519 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:59.519 22:51:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.519 22:51:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:59.519 22:51:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:59.781 /dev/nbd13 00:06:59.781 22:51:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:59.781 22:51:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:59.781 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:59.781 22:51:38 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:59.781 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.781 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.781 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:59.781 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:59.781 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.781 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.781 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.781 1+0 records in 00:06:59.781 1+0 records out 00:06:59.781 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000789633 s, 5.2 MB/s 00:06:59.782 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.782 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:59.782 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.782 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:59.782 22:51:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:59.782 22:51:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.782 22:51:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:59.782 22:51:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:59.782 22:51:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.782 22:51:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.044 22:51:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:00.044 { 00:07:00.044 "nbd_device": "/dev/nbd0", 00:07:00.044 "bdev_name": "Nvme0n1" 00:07:00.044 }, 00:07:00.044 { 00:07:00.044 "nbd_device": "/dev/nbd1", 00:07:00.044 "bdev_name": "Nvme1n1" 00:07:00.044 }, 00:07:00.044 { 00:07:00.044 "nbd_device": "/dev/nbd10", 00:07:00.044 "bdev_name": "Nvme2n1" 00:07:00.044 }, 00:07:00.044 { 00:07:00.044 "nbd_device": "/dev/nbd11", 00:07:00.044 "bdev_name": "Nvme2n2" 00:07:00.044 }, 00:07:00.044 { 00:07:00.044 "nbd_device": "/dev/nbd12", 00:07:00.044 "bdev_name": "Nvme2n3" 00:07:00.044 }, 00:07:00.044 { 00:07:00.044 "nbd_device": "/dev/nbd13", 00:07:00.044 "bdev_name": "Nvme3n1" 00:07:00.044 } 00:07:00.044 ]' 00:07:00.044 22:51:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:00.044 { 00:07:00.044 "nbd_device": "/dev/nbd0", 00:07:00.044 "bdev_name": "Nvme0n1" 00:07:00.044 }, 00:07:00.044 { 00:07:00.044 "nbd_device": "/dev/nbd1", 00:07:00.044 "bdev_name": "Nvme1n1" 00:07:00.044 }, 00:07:00.044 { 00:07:00.044 "nbd_device": "/dev/nbd10", 00:07:00.044 "bdev_name": "Nvme2n1" 00:07:00.044 }, 00:07:00.044 { 00:07:00.044 "nbd_device": "/dev/nbd11", 00:07:00.044 "bdev_name": "Nvme2n2" 00:07:00.044 }, 00:07:00.044 { 00:07:00.044 "nbd_device": "/dev/nbd12", 00:07:00.044 "bdev_name": "Nvme2n3" 00:07:00.044 }, 00:07:00.044 { 00:07:00.044 "nbd_device": "/dev/nbd13", 00:07:00.044 "bdev_name": 
"Nvme3n1" 00:07:00.044 } 00:07:00.044 ]' 00:07:00.044 22:51:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.044 22:51:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:00.044 /dev/nbd1 00:07:00.044 /dev/nbd10 00:07:00.044 /dev/nbd11 00:07:00.044 /dev/nbd12 00:07:00.044 /dev/nbd13' 00:07:00.044 22:51:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:00.044 /dev/nbd1 00:07:00.044 /dev/nbd10 00:07:00.044 /dev/nbd11 00:07:00.044 /dev/nbd12 00:07:00.044 /dev/nbd13' 00:07:00.044 22:51:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.044 22:51:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:00.044 22:51:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:00.044 22:51:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:00.044 22:51:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:00.044 22:51:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:00.044 22:51:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:00.044 22:51:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:00.044 22:51:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:00.044 22:51:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:00.044 22:51:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:00.044 22:51:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:00.044 256+0 records in 00:07:00.044 256+0 records out 00:07:00.044 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00918891 s, 114 MB/s 00:07:00.044 22:51:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.044 22:51:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:00.044 256+0 records in 00:07:00.044 256+0 records out 00:07:00.044 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.132931 s, 7.9 MB/s 00:07:00.044 22:51:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.044 22:51:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:00.305 256+0 records in 00:07:00.305 256+0 records out 00:07:00.305 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15868 s, 6.6 MB/s 00:07:00.305 22:51:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.305 22:51:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:00.566 256+0 records in 00:07:00.566 256+0 records out 00:07:00.566 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162676 s, 6.4 MB/s 00:07:00.566 22:51:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.566 22:51:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 
count=256 oflag=direct 00:07:00.566 256+0 records in 00:07:00.566 256+0 records out 00:07:00.566 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.174871 s, 6.0 MB/s 00:07:00.566 22:51:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.566 22:51:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:00.829 256+0 records in 00:07:00.829 256+0 records out 00:07:00.829 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.180065 s, 5.8 MB/s 00:07:00.829 22:51:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.829 22:51:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:01.090 256+0 records in 00:07:01.090 256+0 records out 00:07:01.091 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144151 s, 7.3 MB/s 00:07:01.091 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:01.091 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:01.091 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:01.091 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:01.091 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:01.091 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:01.091 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:01.091 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.091 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:01.091 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.091 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:01.091 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.091 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:01.091 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.091 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:01.091 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.091 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:01.091 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.091 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:01.091 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:01.091 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # 
nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:01.091 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.091 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:01.091 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:01.091 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:01.091 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.091 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:01.352 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:01.352 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:01.352 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:01.352 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.352 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.352 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:01.352 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:01.352 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.352 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.352 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:01.352 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:01.352 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:01.352 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:01.352 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.352 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.352 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:01.352 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:01.352 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.352 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.352 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:01.680 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:01.680 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:01.680 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:01.680 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.680 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.680 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:01.680 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:01.680 22:51:40 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:01.680 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.680 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:01.941 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:01.941 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:01.941 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:01.941 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.941 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.941 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:01.941 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:01.941 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.941 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.941 22:51:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:01.941 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:02.202 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:02.202 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:02.202 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.202 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.202 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:02.202 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.202 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.202 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.202 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:02.202 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:02.202 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:02.202 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:02.202 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.202 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.202 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:02.202 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.202 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.202 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:02.202 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.202 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:02.464 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 
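The disk-count assertion seen at the top of this section (count=6 after the devices were started) and again in the teardown just below (count=0) is the same idiom: dump the exported devices with nbd_get_disks, extract .nbd_device with jq, and count matches with grep -c, which exits non-zero on zero matches, hence the trailing true in the trace. A minimal standalone sketch, assuming the rpc.py path and socket shown in the trace:

# Assert the SPDK nbd target currently exports exactly $expected devices (sketch).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
expected=0
count=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[ "$count" -eq "$expected" ] || { echo "expected $expected nbd devices, got $count" >&2; exit 1; }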
00:07:02.464 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:02.464 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:02.464 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:02.464 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:02.464 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:02.464 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:02.464 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:02.464 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:02.464 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:02.464 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:02.464 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:02.464 22:51:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:02.464 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.464 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:02.464 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:02.724 malloc_lvol_verify 00:07:02.724 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:02.986 8f8e07b3-34a8-4243-880f-29157faf7bfa 00:07:02.986 22:51:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:03.247 d5ec232e-3edd-4ad7-b798-5ac4ece6aa88 00:07:03.247 22:51:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:03.247 /dev/nbd0 00:07:03.247 22:51:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:03.247 22:51:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:03.247 22:51:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:03.247 22:51:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:03.247 22:51:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:03.247 mke2fs 1.47.0 (5-Feb-2023) 00:07:03.247 Discarding device blocks: 0/4096 done 00:07:03.247 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:03.247 00:07:03.247 Allocating group tables: 0/1 done 00:07:03.247 Writing inode tables: 0/1 done 00:07:03.507 Creating journal (1024 blocks): done 00:07:03.507 Writing superblocks and filesystem accounting information: 0/1 done 00:07:03.507 00:07:03.507 22:51:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:03.507 22:51:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.507 22:51:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:03.507 22:51:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:03.507 22:51:42 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:03.507 22:51:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.507 22:51:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:03.507 22:51:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:03.507 22:51:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:03.507 22:51:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:03.508 22:51:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.508 22:51:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.508 22:51:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:03.508 22:51:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.508 22:51:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.508 22:51:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73468 00:07:03.508 22:51:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 73468 ']' 00:07:03.508 22:51:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 73468 00:07:03.508 22:51:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:03.508 22:51:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.508 22:51:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73468 00:07:03.508 22:51:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.508 killing process with pid 73468 00:07:03.508 22:51:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.508 22:51:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73468' 00:07:03.508 22:51:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 73468 00:07:03.508 22:51:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 73468 00:07:03.768 22:51:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:03.768 00:07:03.768 real 0m9.584s 00:07:03.768 user 0m13.436s 00:07:03.768 sys 0m3.297s 00:07:03.768 22:51:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.768 ************************************ 00:07:03.768 22:51:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:03.768 END TEST bdev_nbd 00:07:03.768 ************************************ 00:07:03.768 22:51:42 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:03.768 skipping fio tests on NVMe due to multi-ns failures. 00:07:03.768 22:51:42 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:07:03.768 22:51:42 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
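Every nbd_stop_disk call in the teardown above is followed by the same wait: poll /proc/partitions until the kernel has actually released the device, giving up after 20 attempts. A condensed sketch of that loop; the 0.1 s sleep between iterations is an assumption, since the trace does not show the delay:

# Block until /proc/partitions no longer lists the given nbd device (sketch).
waitfornbd_exit() {
    local nbd_name=$1 i
    for (( i = 1; i <= 20; i++ )); do
        grep -q -w "$nbd_name" /proc/partitions || return 0   # device gone: success
        sleep 0.1                                             # assumed back-off
    done
    echo "$nbd_name still present after 20 checks" >&2
    return 1
}
waitfornbd_exit nbd0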
00:07:03.768 22:51:42 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:03.768 22:51:42 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:03.768 22:51:42 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:03.768 22:51:42 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.768 22:51:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:03.768 ************************************ 00:07:03.768 START TEST bdev_verify 00:07:03.768 ************************************ 00:07:03.768 22:51:42 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:04.028 [2024-11-26 22:51:42.949993] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:07:04.028 [2024-11-26 22:51:42.950109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73840 ] 00:07:04.028 [2024-11-26 22:51:43.084431] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:04.028 [2024-11-26 22:51:43.112257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:04.028 [2024-11-26 22:51:43.140698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.028 [2024-11-26 22:51:43.140830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.599 Running I/O for 5 seconds... 
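The verify job above is the bdevperf example app driven entirely by command-line flags. Nothing here is new, only the invocation from the trace restated with each flag annotated; the reading of -C (every core in the mask drives each bdev, bdevperf's multithread mode) should be treated as an assumption:

# Re-run the same 5-second verify workload by hand (sketch).
# -q 128    : 128 outstanding I/Os per job
# -o 4096   : 4 KiB I/O size
# -w verify : write a pattern, read it back, compare
# -t 5      : run for 5 seconds
# -C -m 0x3 : multithread mode across cores 0 and 1
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3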
00:07:06.928 19264.00 IOPS, 75.25 MiB/s [2024-11-26T22:51:46.998Z] 19840.00 IOPS, 77.50 MiB/s [2024-11-26T22:51:47.941Z] 19456.00 IOPS, 76.00 MiB/s [2024-11-26T22:51:48.890Z] 20064.00 IOPS, 78.38 MiB/s [2024-11-26T22:51:48.890Z] 20518.40 IOPS, 80.15 MiB/s 00:07:09.763 Latency(us) 00:07:09.763 [2024-11-26T22:51:48.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:09.763 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:09.763 Verification LBA range: start 0x0 length 0xbd0bd 00:07:09.763 Nvme0n1 : 5.06 1645.70 6.43 0.00 0.00 77597.57 15426.17 129055.51 00:07:09.763 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:09.763 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:09.763 Nvme0n1 : 5.06 1747.11 6.82 0.00 0.00 73060.65 16232.76 76626.71 00:07:09.763 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:09.763 Verification LBA range: start 0x0 length 0xa0000 00:07:09.763 Nvme1n1 : 5.06 1644.71 6.42 0.00 0.00 77534.03 16333.59 120182.94 00:07:09.763 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:09.763 Verification LBA range: start 0xa0000 length 0xa0000 00:07:09.763 Nvme1n1 : 5.06 1745.48 6.82 0.00 0.00 72882.21 18350.08 75416.81 00:07:09.763 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:09.763 Verification LBA range: start 0x0 length 0x80000 00:07:09.763 Nvme2n1 : 5.06 1643.73 6.42 0.00 0.00 77447.54 17039.36 120182.94 00:07:09.763 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:09.763 Verification LBA range: start 0x80000 length 0x80000 00:07:09.763 Nvme2n1 : 5.06 1744.99 6.82 0.00 0.00 72729.82 19358.33 71787.13 00:07:09.763 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:09.763 Verification LBA range: start 0x0 length 0x80000 00:07:09.763 Nvme2n2 : 5.07 1642.58 6.42 0.00 0.00 77348.32 18753.38 117763.15 00:07:09.763 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:09.763 Verification LBA range: start 0x80000 length 0x80000 00:07:09.763 Nvme2n2 : 5.07 1743.72 6.81 0.00 0.00 72603.12 17644.31 68964.04 00:07:09.763 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:09.763 Verification LBA range: start 0x0 length 0x80000 00:07:09.763 Nvme2n3 : 5.07 1641.29 6.41 0.00 0.00 77248.92 19559.98 120182.94 00:07:09.763 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:09.763 Verification LBA range: start 0x80000 length 0x80000 00:07:09.763 Nvme2n3 : 5.08 1751.21 6.84 0.00 0.00 72215.55 5041.23 75416.81 00:07:09.763 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:09.763 Verification LBA range: start 0x0 length 0x20000 00:07:09.763 Nvme3n1 : 5.07 1640.83 6.41 0.00 0.00 77131.30 18249.26 129862.10 00:07:09.763 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:09.763 Verification LBA range: start 0x20000 length 0x20000 00:07:09.763 Nvme3n1 : 5.09 1760.78 6.88 0.00 0.00 71837.12 6351.95 77836.60 00:07:09.763 [2024-11-26T22:51:48.890Z] =================================================================================================================== 00:07:09.763 [2024-11-26T22:51:48.890Z] Total : 20352.14 79.50 0.00 0.00 74892.13 5041.23 129862.10 00:07:10.334 00:07:10.334 real 0m6.457s 00:07:10.334 user 0m12.125s 00:07:10.334 sys 0m0.256s 00:07:10.334 22:51:49 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.334 22:51:49 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:10.334 ************************************ 00:07:10.334 END TEST bdev_verify 00:07:10.334 ************************************ 00:07:10.334 22:51:49 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:10.334 22:51:49 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:10.334 22:51:49 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.334 22:51:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:10.334 ************************************ 00:07:10.334 START TEST bdev_verify_big_io 00:07:10.334 ************************************ 00:07:10.334 22:51:49 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:10.596 [2024-11-26 22:51:49.462382] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:07:10.596 [2024-11-26 22:51:49.462611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73934 ] 00:07:10.596 [2024-11-26 22:51:49.595869] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:10.596 [2024-11-26 22:51:49.623880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:10.596 [2024-11-26 22:51:49.650852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.596 [2024-11-26 22:51:49.650949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.167 Running I/O for 5 seconds... 
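The interim and final reports below pair IOPS with MiB/s; the second column is just IOPS times the I/O size (65536 bytes for this big-I/O run) divided by 2^20. A quick awk check against one of the interim samples that follows:

# throughput(MiB/s) = IOPS * io_size_bytes / 2^20 (sketch).
awk 'BEGIN { printf "%.2f MiB/s\n", 1929.00 * 65536 / 1048576 }'
# prints 120.56 MiB/s, matching the 1929.00 IOPS interim sample in the report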
00:07:16.415 830.00 IOPS, 51.88 MiB/s [2024-11-26T22:51:56.113Z] 1929.00 IOPS, 120.56 MiB/s [2024-11-26T22:51:56.685Z] 2543.33 IOPS, 158.96 MiB/s 00:07:17.558 Latency(us) 00:07:17.558 [2024-11-26T22:51:56.685Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:17.558 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:17.558 Verification LBA range: start 0x0 length 0xbd0b 00:07:17.558 Nvme0n1 : 5.72 123.08 7.69 0.00 0.00 995448.87 23290.49 987274.63 00:07:17.558 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:17.558 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:17.558 Nvme0n1 : 5.72 89.52 5.59 0.00 0.00 1350376.96 17442.66 1780966.01 00:07:17.558 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:17.558 Verification LBA range: start 0x0 length 0xa000 00:07:17.558 Nvme1n1 : 5.86 128.08 8.01 0.00 0.00 942985.74 83886.08 877577.45 00:07:17.558 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:17.558 Verification LBA range: start 0xa000 length 0xa000 00:07:17.558 Nvme1n1 : 5.93 97.17 6.07 0.00 0.00 1184884.75 29239.14 1445421.69 00:07:17.558 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:17.558 Verification LBA range: start 0x0 length 0x8000 00:07:17.558 Nvme2n1 : 5.81 123.42 7.71 0.00 0.00 949005.43 83079.48 1206669.00 00:07:17.558 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:17.558 Verification LBA range: start 0x8000 length 0x8000 00:07:17.558 Nvme2n1 : 6.00 103.15 6.45 0.00 0.00 1050694.29 28029.24 1142141.24 00:07:17.558 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:17.558 Verification LBA range: start 0x0 length 0x8000 00:07:17.558 Nvme2n2 : 5.86 128.38 8.02 0.00 0.00 890568.14 45169.43 1226027.32 00:07:17.558 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:17.558 Verification LBA range: start 0x8000 length 0x8000 00:07:17.558 Nvme2n2 : 6.07 120.55 7.53 0.00 0.00 874686.87 16434.41 1380893.93 00:07:17.558 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:17.558 Verification LBA range: start 0x0 length 0x8000 00:07:17.558 Nvme2n3 : 5.90 126.68 7.92 0.00 0.00 876920.74 44564.48 1806777.11 00:07:17.558 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:17.558 Verification LBA range: start 0x8000 length 0x8000 00:07:17.558 Nvme2n3 : 6.24 160.43 10.03 0.00 0.00 625922.56 7763.50 1413157.81 00:07:17.558 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:17.558 Verification LBA range: start 0x0 length 0x2000 00:07:17.558 Nvme3n1 : 5.92 147.37 9.21 0.00 0.00 735433.87 2810.49 935652.43 00:07:17.558 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:17.558 Verification LBA range: start 0x2000 length 0x2000 00:07:17.558 Nvme3n1 : 6.49 285.96 17.87 0.00 0.00 335181.39 327.68 1438968.91 00:07:17.558 [2024-11-26T22:51:56.685Z] =================================================================================================================== 00:07:17.558 [2024-11-26T22:51:56.685Z] Total : 1633.81 102.11 0.00 0.00 809899.06 327.68 1806777.11 00:07:18.501 ************************************ 00:07:18.501 END TEST bdev_verify_big_io 00:07:18.501 ************************************ 00:07:18.501 00:07:18.501 real 0m8.137s 00:07:18.501 user 0m15.520s 00:07:18.502 sys 0m0.239s 00:07:18.502 
22:51:57 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.502 22:51:57 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:18.502 22:51:57 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:18.502 22:51:57 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:18.502 22:51:57 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.502 22:51:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:18.502 ************************************ 00:07:18.502 START TEST bdev_write_zeroes 00:07:18.502 ************************************ 00:07:18.502 22:51:57 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:18.763 [2024-11-26 22:51:57.641210] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:07:18.763 [2024-11-26 22:51:57.641371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74042 ] 00:07:18.763 [2024-11-26 22:51:57.773333] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:18.763 [2024-11-26 22:51:57.803853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.763 [2024-11-26 22:51:57.822785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.333 Running I/O for 1 seconds... 
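The write_zeroes workload started above only makes sense when the bdevs advertise that I/O type; the bdev dump later in this log shows "write_zeroes": true for the NVMe bdevs. A one-off check for a single bdev, assuming the Nvme0n1 name used throughout and a live default RPC socket:

# Ask the running target whether a bdev supports write_zeroes (sketch).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
    | jq '.[0].supported_io_types.write_zeroes'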
00:07:20.281 66432.00 IOPS, 259.50 MiB/s 00:07:20.281 Latency(us) 00:07:20.281 [2024-11-26T22:51:59.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:20.281 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:20.281 Nvme0n1 : 1.02 10975.72 42.87 0.00 0.00 11579.48 4637.93 24702.03 00:07:20.281 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:20.281 Nvme1n1 : 1.02 11006.63 42.99 0.00 0.00 11598.46 9225.45 21778.12 00:07:20.281 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:20.281 Nvme2n1 : 1.02 10994.25 42.95 0.00 0.00 11551.10 8973.39 19963.27 00:07:20.282 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:20.282 Nvme2n2 : 1.03 10981.91 42.90 0.00 0.00 11549.33 8973.39 19559.98 00:07:20.282 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:20.282 Nvme2n3 : 1.03 10969.56 42.85 0.00 0.00 11513.18 8922.98 19559.98 00:07:20.282 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:20.282 Nvme3n1 : 1.03 10957.21 42.80 0.00 0.00 11506.29 7864.32 20568.22 00:07:20.282 [2024-11-26T22:51:59.409Z] =================================================================================================================== 00:07:20.282 [2024-11-26T22:51:59.409Z] Total : 65885.27 257.36 0.00 0.00 11549.61 4637.93 24702.03 00:07:20.545 00:07:20.545 real 0m1.836s 00:07:20.545 user 0m1.553s 00:07:20.545 sys 0m0.174s 00:07:20.545 22:51:59 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.545 22:51:59 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:20.545 ************************************ 00:07:20.545 END TEST bdev_write_zeroes 00:07:20.545 ************************************ 00:07:20.545 22:51:59 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:20.545 22:51:59 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:20.545 22:51:59 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.545 22:51:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:20.545 ************************************ 00:07:20.545 START TEST bdev_json_nonenclosed 00:07:20.545 ************************************ 00:07:20.545 22:51:59 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:20.545 [2024-11-26 22:51:59.521967] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:07:20.545 [2024-11-26 22:51:59.522077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74080 ] 00:07:20.545 [2024-11-26 22:51:59.656515] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:20.805 [2024-11-26 22:51:59.684652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.805 [2024-11-26 22:51:59.703808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.805 [2024-11-26 22:51:59.703886] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:20.805 [2024-11-26 22:51:59.703903] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:20.805 [2024-11-26 22:51:59.703912] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:20.805 00:07:20.805 real 0m0.314s 00:07:20.805 user 0m0.118s 00:07:20.805 sys 0m0.092s 00:07:20.805 22:51:59 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.805 22:51:59 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:20.805 ************************************ 00:07:20.805 END TEST bdev_json_nonenclosed 00:07:20.805 ************************************ 00:07:20.805 22:51:59 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:20.805 22:51:59 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:20.805 22:51:59 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.805 22:51:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:20.805 ************************************ 00:07:20.805 START TEST bdev_json_nonarray 00:07:20.805 ************************************ 00:07:20.805 22:51:59 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:20.805 [2024-11-26 22:51:59.880604] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:07:20.805 [2024-11-26 22:51:59.880714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74105 ] 00:07:21.066 [2024-11-26 22:52:00.010907] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:21.066 [2024-11-26 22:52:00.033554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.066 [2024-11-26 22:52:00.053504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.066 [2024-11-26 22:52:00.053591] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
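These two negative tests feed bdevperf configs that are deliberately malformed in different ways, and both are expected to fail fast with the json_config errors shown above. Sketches of the two shapes follow; they are assumptions about what the repo's nonenclosed.json and nonarray.json look like, with only the error messages taken from the log:

# nonenclosed.json shape: subsystem config with the outer {} stripped (assumed).
cat > /tmp/nonenclosed.json <<'EOF'
"subsystems": []
EOF

# nonarray.json shape: "subsystems" present but not an array (assumed).
cat > /tmp/nonarray.json <<'EOF'
{ "subsystems": { "subsystem": "bdev", "config": [] } }
EOF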
00:07:21.066 [2024-11-26 22:52:00.053607] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:21.066 [2024-11-26 22:52:00.053619] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:21.066 00:07:21.066 real 0m0.303s 00:07:21.066 user 0m0.111s 00:07:21.066 sys 0m0.089s 00:07:21.066 22:52:00 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.066 22:52:00 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:21.066 ************************************ 00:07:21.066 END TEST bdev_json_nonarray 00:07:21.066 ************************************ 00:07:21.066 22:52:00 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:07:21.066 22:52:00 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:07:21.066 22:52:00 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:07:21.066 22:52:00 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:21.066 22:52:00 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:07:21.066 22:52:00 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:21.066 22:52:00 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:21.066 22:52:00 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:21.066 22:52:00 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:21.066 22:52:00 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:21.066 22:52:00 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:21.066 00:07:21.066 real 0m31.179s 00:07:21.066 user 0m48.932s 00:07:21.066 sys 0m5.400s 00:07:21.066 22:52:00 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.066 22:52:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:21.066 ************************************ 00:07:21.066 END TEST blockdev_nvme 00:07:21.066 ************************************ 00:07:21.066 22:52:00 -- spdk/autotest.sh@209 -- # uname -s 00:07:21.326 22:52:00 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:21.326 22:52:00 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:21.326 22:52:00 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:21.326 22:52:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.326 22:52:00 -- common/autotest_common.sh@10 -- # set +x 00:07:21.326 ************************************ 00:07:21.326 START TEST blockdev_nvme_gpt 00:07:21.326 ************************************ 00:07:21.326 22:52:00 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:21.326 * Looking for test storage... 
00:07:21.326 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:21.326 22:52:00 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:21.326 22:52:00 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:07:21.326 22:52:00 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:21.326 22:52:00 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:21.326 22:52:00 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:21.326 22:52:00 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:21.326 22:52:00 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:21.327 22:52:00 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:21.327 22:52:00 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:21.327 22:52:00 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:21.327 22:52:00 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:21.327 22:52:00 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:21.327 22:52:00 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:21.327 22:52:00 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:21.327 22:52:00 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:21.327 22:52:00 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:21.327 22:52:00 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:21.327 22:52:00 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:21.327 22:52:00 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:21.327 22:52:00 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:21.327 22:52:00 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:21.327 22:52:00 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.327 22:52:00 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:21.327 22:52:00 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:21.327 22:52:00 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:21.327 22:52:00 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:21.327 22:52:00 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.327 22:52:00 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:21.327 22:52:00 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:21.327 22:52:00 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:21.327 22:52:00 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:21.327 22:52:00 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:21.327 22:52:00 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.327 22:52:00 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:21.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.327 --rc genhtml_branch_coverage=1 00:07:21.327 --rc genhtml_function_coverage=1 00:07:21.327 --rc genhtml_legend=1 00:07:21.327 --rc geninfo_all_blocks=1 00:07:21.327 --rc geninfo_unexecuted_blocks=1 00:07:21.327 00:07:21.327 ' 00:07:21.327 22:52:00 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:21.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.327 --rc 
genhtml_branch_coverage=1 00:07:21.327 --rc genhtml_function_coverage=1 00:07:21.327 --rc genhtml_legend=1 00:07:21.327 --rc geninfo_all_blocks=1 00:07:21.327 --rc geninfo_unexecuted_blocks=1 00:07:21.327 00:07:21.327 ' 00:07:21.327 22:52:00 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:21.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.327 --rc genhtml_branch_coverage=1 00:07:21.327 --rc genhtml_function_coverage=1 00:07:21.327 --rc genhtml_legend=1 00:07:21.327 --rc geninfo_all_blocks=1 00:07:21.327 --rc geninfo_unexecuted_blocks=1 00:07:21.327 00:07:21.327 ' 00:07:21.327 22:52:00 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:21.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.327 --rc genhtml_branch_coverage=1 00:07:21.327 --rc genhtml_function_coverage=1 00:07:21.327 --rc genhtml_legend=1 00:07:21.327 --rc geninfo_all_blocks=1 00:07:21.327 --rc geninfo_unexecuted_blocks=1 00:07:21.327 00:07:21.327 ' 00:07:21.327 22:52:00 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:21.327 22:52:00 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:21.327 22:52:00 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:21.327 22:52:00 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:21.327 22:52:00 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:21.327 22:52:00 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:21.327 22:52:00 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:21.327 22:52:00 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:21.327 22:52:00 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:21.327 22:52:00 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:07:21.327 22:52:00 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:07:21.327 22:52:00 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:07:21.327 22:52:00 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:07:21.327 22:52:00 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:07:21.327 22:52:00 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:07:21.327 22:52:00 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:07:21.327 22:52:00 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:07:21.327 22:52:00 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:07:21.327 22:52:00 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:07:21.327 22:52:00 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:07:21.327 22:52:00 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:07:21.327 22:52:00 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:07:21.327 22:52:00 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:07:21.327 22:52:00 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:07:21.327 22:52:00 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74178 00:07:21.327 22:52:00 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:21.327 22:52:00 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 74178 
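The waitforlisten call above blocks until the freshly started spdk_tgt (pid 74178 here) answers on /var/tmp/spdk.sock, and the trap kills it on any exit. A rough equivalent, assuming rpc.py's -t timeout flag and spdk_get_version as the liveness probe:

# Start spdk_tgt and wait for its RPC socket to answer (sketch).
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" &
tgt_pid=$!
until "$SPDK/scripts/rpc.py" -t 1 spdk_get_version >/dev/null 2>&1; do
    kill -0 "$tgt_pid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
    sleep 0.5
done
echo "spdk_tgt ($tgt_pid) is up"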
00:07:21.327 22:52:00 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 74178 ']' 00:07:21.327 22:52:00 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.327 22:52:00 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.327 22:52:00 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.327 22:52:00 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.327 22:52:00 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:21.327 22:52:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:21.327 [2024-11-26 22:52:00.431210] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:07:21.327 [2024-11-26 22:52:00.431339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74178 ] 00:07:21.588 [2024-11-26 22:52:00.563517] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:21.588 [2024-11-26 22:52:00.590710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.588 [2024-11-26 22:52:00.612128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.526 22:52:01 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.526 22:52:01 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:07:22.526 22:52:01 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:07:22.526 22:52:01 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:07:22.526 22:52:01 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:22.526 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:22.785 Waiting for block devices as requested 00:07:22.785 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:22.785 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:22.785 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:23.045 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:28.336 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:28.336 22:52:07 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:28.336 22:52:07 blockdev_nvme_gpt 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:28.336 22:52:07 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:28.336 22:52:07 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:28.336 22:52:07 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:28.336 22:52:07 
blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:28.336 22:52:07 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:28.336 22:52:07 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:28.336 22:52:07 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:28.336 22:52:07 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:28.336 22:52:07 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:28.336 BYT; 00:07:28.336 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:28.336 22:52:07 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:28.336 BYT; 00:07:28.336 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:28.336 22:52:07 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:28.336 22:52:07 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:28.336 22:52:07 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:28.336 22:52:07 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:28.336 22:52:07 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:28.336 22:52:07 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:28.336 22:52:07 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:28.336 22:52:07 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:28.336 22:52:07 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:28.336 22:52:07 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:28.336 22:52:07 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:28.336 22:52:07 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:28.336 22:52:07 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:28.336 22:52:07 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:28.336 22:52:07 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:28.336 22:52:07 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:28.336 22:52:07 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:28.336 22:52:07 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:28.336 22:52:07 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:28.336 22:52:07 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:28.336 22:52:07 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:28.336 22:52:07 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:28.336 22:52:07 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:28.336 22:52:07 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w 
SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:28.336 22:52:07 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:28.336 22:52:07 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:28.336 22:52:07 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:28.336 22:52:07 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:28.336 22:52:07 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:29.276 The operation has completed successfully. 00:07:29.276 22:52:08 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:30.208 The operation has completed successfully. 00:07:30.208 22:52:09 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:30.464 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:31.029 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:31.029 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:31.029 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:31.029 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:31.029 22:52:10 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:31.029 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.029 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:31.029 [] 00:07:31.029 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.029 22:52:10 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:31.029 22:52:10 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:31.029 22:52:10 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:31.029 22:52:10 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:31.289 22:52:10 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:31.289 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.290 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:31.551 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.551 22:52:10 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:07:31.551 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.551 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:31.551 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.551 22:52:10 blockdev_nvme_gpt -- 
bdev/blockdev.sh@777 -- # cat 00:07:31.551 22:52:10 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:07:31.551 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.551 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:31.551 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.551 22:52:10 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:07:31.551 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.551 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:31.551 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.551 22:52:10 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:31.551 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.551 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:31.551 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.551 22:52:10 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:07:31.551 22:52:10 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:07:31.551 22:52:10 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:07:31.551 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.551 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:31.551 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.551 22:52:10 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:07:31.551 22:52:10 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:07:31.552 22:52:10 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "a13d59de-b10c-42a7-91e2-d642a58f95f5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a13d59de-b10c-42a7-91e2-d642a58f95f5",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' 
'}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "e56588a1-03ba-46de-9b28-1ce619207916"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e56588a1-03ba-46de-9b28-1ce619207916",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' 
"security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "aa0d07ef-cc6a-4cc9-ba9f-a86a1c4d7675"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "aa0d07ef-cc6a-4cc9-ba9f-a86a1c4d7675",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "cfa9d955-7fdc-44c5-8681-ca37d9a64908"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cfa9d955-7fdc-44c5-8681-ca37d9a64908",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "0c93d7a9-ce91-4034-8103-acdef600e3b4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": 
"0c93d7a9-ce91-4034-8103-acdef600e3b4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:31.552 22:52:10 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:07:31.552 22:52:10 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:07:31.552 22:52:10 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:07:31.552 22:52:10 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 74178 00:07:31.552 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 74178 ']' 00:07:31.552 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 74178 00:07:31.552 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:31.552 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:31.552 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74178 00:07:31.552 killing process with pid 74178 00:07:31.552 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:31.552 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:31.552 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74178' 00:07:31.552 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 74178 00:07:31.552 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 74178 00:07:32.123 22:52:10 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:32.123 22:52:10 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:32.123 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:32.123 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.123 22:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:32.123 ************************************ 00:07:32.123 START TEST bdev_hello_world 00:07:32.123 ************************************ 00:07:32.123 22:52:10 blockdev_nvme_gpt.bdev_hello_world -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:32.123 [2024-11-26 22:52:11.024257] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:07:32.123 [2024-11-26 22:52:11.024386] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74792 ] 00:07:32.123 [2024-11-26 22:52:11.156565] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:32.123 [2024-11-26 22:52:11.186407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.123 [2024-11-26 22:52:11.210120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.696 [2024-11-26 22:52:11.599263] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:32.696 [2024-11-26 22:52:11.599320] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:32.696 [2024-11-26 22:52:11.599348] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:32.696 [2024-11-26 22:52:11.601588] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:32.696 [2024-11-26 22:52:11.602002] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:32.696 [2024-11-26 22:52:11.602031] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:32.696 [2024-11-26 22:52:11.602283] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:32.696 00:07:32.696 [2024-11-26 22:52:11.602325] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:32.696 00:07:32.696 real 0m0.820s 00:07:32.696 user 0m0.529s 00:07:32.696 sys 0m0.186s 00:07:32.696 22:52:11 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.696 22:52:11 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:32.696 ************************************ 00:07:32.696 END TEST bdev_hello_world 00:07:32.696 ************************************ 00:07:32.696 22:52:11 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:07:32.696 22:52:11 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:32.696 22:52:11 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.696 22:52:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:32.696 ************************************ 00:07:32.696 START TEST bdev_bounds 00:07:32.696 ************************************ 00:07:32.696 22:52:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:32.696 22:52:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74823 00:07:32.696 22:52:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:32.696 Process bdevio pid: 74823 00:07:32.697 22:52:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74823' 00:07:32.697 22:52:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74823 00:07:32.697 22:52:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74823 ']' 
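The device probe traced at bdev/blockdev.sh@107-115 above boils down to one idiom: run parted in machine-readable mode against each NVMe namespace and take the first disk whose label parted cannot recognise. A minimal bash sketch reconstructed from the trace; the /dev/nvme*n1 glob stands in for the script's own nvme_devs array and is an assumption:

    # Probe each namespace; an unformatted disk makes parted complain
    # "unrecognised disk label", which marks it as safe to repartition.
    gpt_nvme=
    for dev in /dev/nvme*n1; do
        pt=$(parted "$dev" -ms print 2>&1) || true
        if [[ $pt == *"$dev: unrecognised disk label"* ]]; then
            gpt_nvme=$dev
            break
        fi
    done
    # Lay down the test layout exactly as traced at blockdev.sh@127: a fresh
    # GPT with two halves named SPDK_TEST_first and SPDK_TEST_second.
    [[ -n $gpt_nvme ]] && parted -s "$gpt_nvme" \
        mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%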
00:07:32.697 22:52:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:32.697 22:52:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.697 22:52:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.697 22:52:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.697 22:52:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.697 22:52:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:32.959 [2024-11-26 22:52:11.887712] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:07:32.959 [2024-11-26 22:52:11.887847] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74823 ] 00:07:32.959 [2024-11-26 22:52:12.022530] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:32.959 [2024-11-26 22:52:12.051036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:32.959 [2024-11-26 22:52:12.077970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.959 [2024-11-26 22:52:12.078115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.959 [2024-11-26 22:52:12.078227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.903 22:52:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.904 22:52:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:33.904 22:52:12 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:33.904 I/O targets: 00:07:33.904 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:33.904 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:33.904 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:33.904 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:33.904 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:33.904 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:33.904 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:33.904 00:07:33.904 00:07:33.904 CUnit - A unit testing framework for C - Version 2.1-3 00:07:33.904 http://cunit.sourceforge.net/ 00:07:33.904 00:07:33.904 00:07:33.904 Suite: bdevio tests on: Nvme3n1 00:07:33.904 Test: blockdev write read block ...passed 00:07:33.904 Test: blockdev write zeroes read block ...passed 00:07:33.904 Test: blockdev write zeroes read no split ...passed 00:07:33.904 Test: blockdev write zeroes read split ...passed 00:07:33.904 Test: blockdev write zeroes read split partial ...passed 00:07:33.904 Test: blockdev reset ...[2024-11-26 22:52:12.811039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:33.904 [2024-11-26 22:52:12.813057] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: 
*NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 00:07:33.904 passed 00:07:33.904 Test: blockdev write read 8 blocks ...passed 00:07:33.904 Test: blockdev write read size > 128k ...passed 00:07:33.904 Test: blockdev write read invalid size ...passed 00:07:33.904 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:33.904 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:33.904 Test: blockdev write read max offset ...passed 00:07:33.904 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:33.904 Test: blockdev writev readv 8 blocks ...passed 00:07:33.904 Test: blockdev writev readv 30 x 1block ...passed 00:07:33.904 Test: blockdev writev readv block ...passed 00:07:33.904 Test: blockdev writev readv size > 128k ...passed 00:07:33.904 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:33.904 Test: blockdev comparev and writev ...[2024-11-26 22:52:12.819265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cba0e000 len:0x1000 00:07:33.904 [2024-11-26 22:52:12.819332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:33.904 passed 00:07:33.904 Test: blockdev nvme passthru rw ...passed 00:07:33.904 Test: blockdev nvme passthru vendor specific ...[2024-11-26 22:52:12.820239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:33.904 passed 00:07:33.904 Test: blockdev nvme admin passthru ...[2024-11-26 22:52:12.820269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:33.904 passed 00:07:33.904 Test: blockdev copy ...passed 00:07:33.904 Suite: bdevio tests on: Nvme2n3 00:07:33.904 Test: blockdev write read block ...passed 00:07:33.904 Test: blockdev write zeroes read block ...passed 00:07:33.904 Test: blockdev write zeroes read no split ...passed 00:07:33.904 Test: blockdev write zeroes read split ...passed 00:07:33.904 Test: blockdev write zeroes read split partial ...passed 00:07:33.904 Test: blockdev reset ...[2024-11-26 22:52:12.833814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:33.904 passed 00:07:33.904 Test: blockdev write read 8 blocks ...[2024-11-26 22:52:12.836049] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
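The GPT type GUIDs used earlier in this section are not hard-coded; scripts/common.sh@411-431 parses them out of gpt.h, and the traced intermediate values reveal the trick: split the #define line on parentheses, then strip the C syntax. A sketch of that helper, reconstructed from the trace with the header path the log shows:

    get_spdk_gpt() {
        local spdk_guid
        local gpt_h=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
        [[ -e $gpt_h ]] || return 1
        # IFS='()' splits the #define line so the macro arguments land in spdk_guid.
        IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$gpt_h")
        spdk_guid=${spdk_guid//, /-}   # "0x6527994e, 0x2c5a, ..." -> "0x6527994e-0x2c5a-..."
        spdk_guid=${spdk_guid//0x/}    # -> 6527994e-2c5a-4eec-9613-8f5944074e8b
        echo "$spdk_guid"
    }

Running the same extraction with grep -w SPDK_GPT_PART_TYPE_GUID_OLD yields the legacy GUID 7c5222bd-8f5d-4087-9c00-bf9843c7b58c applied to the second partition.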
00:07:33.904 passed 00:07:33.904 Test: blockdev write read size > 128k ...passed 00:07:33.904 Test: blockdev write read invalid size ...passed 00:07:33.904 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:33.904 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:33.904 Test: blockdev write read max offset ...passed 00:07:33.904 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:33.904 Test: blockdev writev readv 8 blocks ...passed 00:07:33.904 Test: blockdev writev readv 30 x 1block ...passed 00:07:33.904 Test: blockdev writev readv block ...passed 00:07:33.904 Test: blockdev writev readv size > 128k ...passed 00:07:33.904 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:33.904 Test: blockdev comparev and writev ...[2024-11-26 22:52:12.841988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cba06000 len:0x1000 00:07:33.904 [2024-11-26 22:52:12.842028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:33.904 passed 00:07:33.904 Test: blockdev nvme passthru rw ...passed 00:07:33.904 Test: blockdev nvme passthru vendor specific ...passed 00:07:33.904 Test: blockdev nvme admin passthru ...[2024-11-26 22:52:12.842589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:33.904 [2024-11-26 22:52:12.842616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:33.904 passed 00:07:33.904 Test: blockdev copy ...passed 00:07:33.904 Suite: bdevio tests on: Nvme2n2 00:07:33.904 Test: blockdev write read block ...passed 00:07:33.904 Test: blockdev write zeroes read block ...passed 00:07:33.904 Test: blockdev write zeroes read no split ...passed 00:07:33.904 Test: blockdev write zeroes read split ...passed 00:07:33.904 Test: blockdev write zeroes read split partial ...passed 00:07:33.904 Test: blockdev reset ...[2024-11-26 22:52:12.858597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:33.904 passed 00:07:33.904 Test: blockdev write read 8 blocks ...[2024-11-26 22:52:12.862566] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
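Those two GUIDs are what the sgdisk calls traced at blockdev.sh@131-132 stamp onto the parted-created partitions, which is why they later surface as the GPT bdevs Nvme1n1p1 and Nvme1n1p2 in the dump above. The same commands restated outside the trace, with device and GUIDs exactly as logged:

    # Partition 1: the current SPDK partition type GUID plus a fixed unique GUID.
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
           -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
    # Partition 2: the old SPDK type GUID, so both detection paths get exercised.
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
           -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1

Each call prints "The operation has completed successfully.", matching the trace.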
00:07:33.904 passed 00:07:33.904 Test: blockdev write read size > 128k ...passed 00:07:33.904 Test: blockdev write read invalid size ...passed 00:07:33.904 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:33.904 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:33.904 Test: blockdev write read max offset ...passed 00:07:33.904 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:33.904 Test: blockdev writev readv 8 blocks ...passed 00:07:33.904 Test: blockdev writev readv 30 x 1block ...passed 00:07:33.904 Test: blockdev writev readv block ...passed 00:07:33.904 Test: blockdev writev readv size > 128k ...passed 00:07:33.904 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:33.904 Test: blockdev comparev and writev ...[2024-11-26 22:52:12.869338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cba08000 len:0x1000 00:07:33.904 [2024-11-26 22:52:12.869374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:33.904 passed 00:07:33.904 Test: blockdev nvme passthru rw ...passed 00:07:33.904 Test: blockdev nvme passthru vendor specific ...passed 00:07:33.904 Test: blockdev nvme admin passthru ...[2024-11-26 22:52:12.870240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:33.904 [2024-11-26 22:52:12.870268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:33.904 passed 00:07:33.904 Test: blockdev copy ...passed 00:07:33.904 Suite: bdevio tests on: Nvme2n1 00:07:33.904 Test: blockdev write read block ...passed 00:07:33.904 Test: blockdev write zeroes read block ...passed 00:07:33.904 Test: blockdev write zeroes read no split ...passed 00:07:33.904 Test: blockdev write zeroes read split ...passed 00:07:33.904 Test: blockdev write zeroes read split partial ...passed 00:07:33.904 Test: blockdev reset ...[2024-11-26 22:52:12.893236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:33.904 [2024-11-26 22:52:12.896676] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
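The controllers themselves were attached over RPC rather than CLI flags: gen_nvme.sh emitted one bdev_nvme_attach_controller entry per PCIe address, and the blob was handed to load_subsystem_config (blockdev.sh@82-83 earlier in this section). The traced payload, trimmed to a single controller and reflowed for readability; the rpc.py path and -j flag are as logged, and the default RPC socket is assumed:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j '{
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
        }
      ]
    }'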
00:07:33.904 passed 00:07:33.904 Test: blockdev write read 8 blocks ...passed 00:07:33.904 Test: blockdev write read size > 128k ...passed 00:07:33.904 Test: blockdev write read invalid size ...passed 00:07:33.904 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:33.904 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:33.904 Test: blockdev write read max offset ...passed 00:07:33.904 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:33.904 Test: blockdev writev readv 8 blocks ...passed 00:07:33.904 Test: blockdev writev readv 30 x 1block ...passed 00:07:33.904 Test: blockdev writev readv block ...passed 00:07:33.904 Test: blockdev writev readv size > 128k ...passed 00:07:33.904 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:33.904 Test: blockdev comparev and writev ...[2024-11-26 22:52:12.904398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ec23d000 len:0x1000 00:07:33.904 [2024-11-26 22:52:12.904544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:33.904 passed 00:07:33.904 Test: blockdev nvme passthru rw ...passed 00:07:33.904 Test: blockdev nvme passthru vendor specific ...[2024-11-26 22:52:12.905247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:33.904 [2024-11-26 22:52:12.905274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:33.904 passed 00:07:33.904 Test: blockdev nvme admin passthru ...passed 00:07:33.904 Test: blockdev copy ...passed 00:07:33.904 Suite: bdevio tests on: Nvme1n1p2 00:07:33.904 Test: blockdev write read block ...passed 00:07:33.904 Test: blockdev write zeroes read block ...passed 00:07:33.904 Test: blockdev write zeroes read no split ...passed 00:07:33.904 Test: blockdev write zeroes read split ...passed 00:07:33.904 Test: blockdev write zeroes read split partial ...passed 00:07:33.904 Test: blockdev reset ...[2024-11-26 22:52:12.928676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:33.904 passed 00:07:33.904 Test: blockdev write read 8 blocks ...[2024-11-26 22:52:12.930702] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:07:33.904 passed
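The long JSON dump earlier in this section came from the pipeline traced at blockdev.sh@785-786: list every bdev over RPC, keep the unclaimed ones, then peel out the names. A compact restatement, assuming rpc.py can reach the default socket:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # mapfile stores jq's pretty-printed objects line by line.
    mapfile -t bdevs < <("$rpc" bdev_get_bdevs | jq -r '.[] | select(.claimed == false)')
    # Reassembled, the lines form a stream of objects; jq pulls one name per object.
    mapfile -t bdevs_name < <(printf '%s\n' "${bdevs[@]}" | jq -r .name)
    printf '%s\n' "${bdevs_name[@]}"   # Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 ...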
00:07:33.905 Test: blockdev write read size > 128k ...passed 00:07:33.905 Test: blockdev write read invalid size ...passed 00:07:33.905 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:33.905 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:33.905 Test: blockdev write read max offset ...passed 00:07:33.905 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:33.905 Test: blockdev writev readv 8 blocks ...passed 00:07:33.905 Test: blockdev writev readv 30 x 1block ...passed 00:07:33.905 Test: blockdev writev readv block ...passed 00:07:33.905 Test: blockdev writev readv size > 128k ...passed 00:07:33.905 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:33.905 Test: blockdev comparev and writev ...[2024-11-26 22:52:12.937221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2ec239000 len:0x1000 00:07:33.905 [2024-11-26 22:52:12.937260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:33.905 passed 00:07:33.905 Test: blockdev nvme passthru rw ...passed 00:07:33.905 Test: blockdev nvme passthru vendor specific ...passed 00:07:33.905 Test: blockdev nvme admin passthru ...passed 00:07:33.905 Test: blockdev copy ...passed 00:07:33.905 Suite: bdevio tests on: Nvme1n1p1 00:07:33.905 Test: blockdev write read block ...passed 00:07:33.905 Test: blockdev write zeroes read block ...passed 00:07:33.905 Test: blockdev write zeroes read no split ...passed 00:07:33.905 Test: blockdev write zeroes read split ...passed 00:07:33.905 Test: blockdev write zeroes read split partial ...passed 00:07:33.905 Test: blockdev reset ...[2024-11-26 22:52:12.950447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:33.905 [2024-11-26 22:52:12.952099] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:07:33.905 passed 00:07:33.905 Test: blockdev write read 8 blocks ...passed 00:07:33.905 Test: blockdev write read size > 128k ...
00:07:33.905 passed 00:07:33.905 Test: blockdev write read invalid size ...passed 00:07:33.905 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:33.905 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:33.905 Test: blockdev write read max offset ...passed 00:07:33.905 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:33.905 Test: blockdev writev readv 8 blocks ...passed 00:07:33.905 Test: blockdev writev readv 30 x 1block ...passed 00:07:33.905 Test: blockdev writev readv block ...passed 00:07:33.905 Test: blockdev writev readv size > 128k ...passed 00:07:33.905 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:33.905 Test: blockdev comparev and writev ...[2024-11-26 22:52:12.962914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2ec235000 len:0x1000 00:07:33.905 [2024-11-26 22:52:12.962952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:33.905 passed 00:07:33.905 Test: blockdev nvme passthru rw ...passed 00:07:33.905 Test: blockdev nvme passthru vendor specific ...passed 00:07:33.905 Test: blockdev nvme admin passthru ...passed 00:07:33.905 Test: blockdev copy ...passed 00:07:33.905 Suite: bdevio tests on: Nvme0n1 00:07:33.905 Test: blockdev write read block ...passed 00:07:33.905 Test: blockdev write zeroes read block ...passed 00:07:33.905 Test: blockdev write zeroes read no split ...passed 00:07:33.905 Test: blockdev write zeroes read split ...passed 00:07:33.905 Test: blockdev write zeroes read split partial ...passed 00:07:33.905 Test: blockdev reset ...[2024-11-26 22:52:12.981463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:33.905 passed 00:07:33.905 Test: blockdev write read 8 blocks ...[2024-11-26 22:52:12.984740] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:33.905 passed 00:07:33.905 Test: blockdev write read size > 128k ...passed 00:07:33.905 Test: blockdev write read invalid size ...passed 00:07:33.905 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:33.905 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:33.905 Test: blockdev write read max offset ...passed 00:07:33.905 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:33.905 Test: blockdev writev readv 8 blocks ...passed 00:07:33.905 Test: blockdev writev readv 30 x 1block ...passed 00:07:33.905 Test: blockdev writev readv block ...passed 00:07:33.905 Test: blockdev writev readv size > 128k ...passed 00:07:33.905 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:33.905 Test: blockdev comparev and writev ...passed 00:07:33.905 Test: blockdev nvme passthru rw ...[2024-11-26 22:52:12.990624] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:33.905 separate metadata which is not supported yet. 
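Both SPDK apps in this test file are torn down the same way: killprocess (autotest_common.sh@954-978 in the trace) checks that the pid is still alive with kill -0, verifies on Linux that the command name is an SPDK reactor rather than sudo, then kills and waits so the exit status is reaped. A simplified sketch with the uname and reactor_0 comm checks elided:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1   # already gone? nothing to do
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                  # reap it and propagate its exit status
    }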
00:07:33.905 passed 00:07:33.905 Test: blockdev nvme passthru vendor specific ...[2024-11-26 22:52:12.991082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:33.905 passed 00:07:33.905 Test: blockdev nvme admin passthru ...[2024-11-26 22:52:12.991114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:33.905 passed 00:07:33.905 Test: blockdev copy ...passed 00:07:33.905 00:07:33.905 Run Summary: Type Total Ran Passed Failed Inactive 00:07:33.905 suites 7 7 n/a 0 0 00:07:33.905 tests 161 161 161 0 0 00:07:33.905 asserts 1025 1025 1025 0 n/a 00:07:33.905 00:07:33.905 Elapsed time = 0.440 seconds 00:07:33.905 0 00:07:33.905 22:52:13 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74823 00:07:33.905 22:52:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74823 ']' 00:07:33.905 22:52:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74823 00:07:33.905 22:52:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:33.905 22:52:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.905 22:52:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74823 00:07:34.166 22:52:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:34.166 22:52:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:34.166 22:52:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74823' 00:07:34.166 killing process with pid 74823 00:07:34.166 22:52:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74823 00:07:34.166 22:52:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74823 00:07:34.166 22:52:13 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:34.166 00:07:34.166 real 0m1.388s 00:07:34.166 user 0m3.437s 00:07:34.166 sys 0m0.305s 00:07:34.166 22:52:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.166 22:52:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:34.166 ************************************ 00:07:34.166 END TEST bdev_bounds 00:07:34.166 ************************************ 00:07:34.166 22:52:13 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:34.166 22:52:13 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:34.166 22:52:13 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.166 22:52:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:34.166 ************************************ 00:07:34.166 START TEST bdev_nbd 00:07:34.166 ************************************ 00:07:34.166 22:52:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:34.166 22:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:34.166 22:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:07:34.166 22:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.166 22:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:34.166 22:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:34.166 22:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:34.166 22:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:34.166 22:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:34.166 22:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:34.166 22:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:34.166 22:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:34.166 22:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:34.166 22:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:34.166 22:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:34.166 22:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:34.166 22:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74871 00:07:34.166 22:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:34.166 22:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74871 /var/tmp/spdk-nbd.sock 00:07:34.167 22:52:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74871 ']' 00:07:34.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:34.167 22:52:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:34.167 22:52:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.167 22:52:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:34.167 22:52:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.167 22:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:34.167 22:52:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:34.428 [2024-11-26 22:52:13.321195] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
00:07:34.428 [2024-11-26 22:52:13.321333] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.428 [2024-11-26 22:52:13.456005] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:34.428 [2024-11-26 22:52:13.486454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.428 [2024-11-26 22:52:13.510765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.371 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.371 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:35.371 22:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:35.371 22:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.371 22:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:35.371 22:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:35.371 22:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:35.371 22:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.371 22:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:35.371 22:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:35.371 22:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:35.371 22:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:35.371 22:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:35.371 22:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:35.371 22:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:35.371 22:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:35.371 22:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:35.371 22:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:35.371 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:35.371 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:35.371 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:35.371 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:35.371 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:35.371 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:35.371 22:52:14 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:35.371 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:35.371 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:35.371 1+0 records in 00:07:35.371 1+0 records out 00:07:35.372 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000465837 s, 8.8 MB/s 00:07:35.372 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:35.372 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:35.372 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:35.372 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:35.372 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:35.372 22:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:35.372 22:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:35.372 22:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:35.633 22:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:35.633 22:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:35.633 22:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:35.633 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:35.633 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:35.633 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:35.633 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:35.633 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:35.633 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:35.633 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:35.633 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:35.633 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:35.633 1+0 records in 00:07:35.633 1+0 records out 00:07:35.633 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000823391 s, 5.0 MB/s 00:07:35.633 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:35.633 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:35.633 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:35.633 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:35.633 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:35.633 22:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:35.633 22:52:14 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:35.633 22:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 00:07:35.895 22:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:35.895 22:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:35.895 22:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:35.895 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:35.895 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:35.895 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:35.895 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:35.895 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:35.895 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:35.895 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:35.895 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:35.895 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:35.895 1+0 records in 00:07:35.895 1+0 records out 00:07:35.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000540152 s, 7.6 MB/s 00:07:35.895 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:35.895 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:35.895 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:35.895 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:35.895 22:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:35.895 22:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:35.895 22:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:35.895 22:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:36.156 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:36.156 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:36.156 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:36.156 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:36.156 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:36.156 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:36.156 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:36.156 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:36.156 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:36.156 22:52:15 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:36.156 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:36.156 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:36.157 1+0 records in 00:07:36.157 1+0 records out 00:07:36.157 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000484229 s, 8.5 MB/s 00:07:36.157 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.157 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:36.157 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.157 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:36.157 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:36.157 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:36.157 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:36.157 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:36.417 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:36.418 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:36.418 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:36.418 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:36.418 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:36.418 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:36.418 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:36.418 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:36.418 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:36.418 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:36.418 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:36.418 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:36.418 1+0 records in 00:07:36.418 1+0 records out 00:07:36.418 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597022 s, 6.9 MB/s 00:07:36.418 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.418 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:36.418 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.418 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:36.418 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:36.418 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:36.418 22:52:15 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:36.418 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:36.418 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:36.418 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:36.418 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:36.418 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:36.418 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:36.418 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:36.418 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:36.418 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:36.679 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:36.679 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:36.679 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:36.679 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:36.679 1+0 records in 00:07:36.679 1+0 records out 00:07:36.679 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000717796 s, 5.7 MB/s 00:07:36.679 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.679 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:36.679 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.679 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:36.679 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:36.679 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:36.679 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:36.679 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:36.679 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:36.679 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:36.679 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:36.679 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:36.679 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:36.679 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:36.679 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:36.679 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:36.679 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:36.679 22:52:15 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:36.679 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:36.679 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:36.679 1+0 records in 00:07:36.679 1+0 records out 00:07:36.679 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450629 s, 9.1 MB/s 00:07:36.679 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.679 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:36.679 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.679 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:36.679 22:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:36.679 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:36.679 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:36.679 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:36.941 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:36.941 { 00:07:36.941 "nbd_device": "/dev/nbd0", 00:07:36.941 "bdev_name": "Nvme0n1" 00:07:36.941 }, 00:07:36.941 { 00:07:36.941 "nbd_device": "/dev/nbd1", 00:07:36.941 "bdev_name": "Nvme1n1p1" 00:07:36.941 }, 00:07:36.941 { 00:07:36.941 "nbd_device": "/dev/nbd2", 00:07:36.941 "bdev_name": "Nvme1n1p2" 00:07:36.941 }, 00:07:36.941 { 00:07:36.941 "nbd_device": "/dev/nbd3", 00:07:36.941 "bdev_name": "Nvme2n1" 00:07:36.941 }, 00:07:36.941 { 00:07:36.941 "nbd_device": "/dev/nbd4", 00:07:36.941 "bdev_name": "Nvme2n2" 00:07:36.941 }, 00:07:36.941 { 00:07:36.941 "nbd_device": "/dev/nbd5", 00:07:36.941 "bdev_name": "Nvme2n3" 00:07:36.941 }, 00:07:36.941 { 00:07:36.941 "nbd_device": "/dev/nbd6", 00:07:36.941 "bdev_name": "Nvme3n1" 00:07:36.941 } 00:07:36.941 ]' 00:07:36.941 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:36.941 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:36.941 { 00:07:36.941 "nbd_device": "/dev/nbd0", 00:07:36.941 "bdev_name": "Nvme0n1" 00:07:36.941 }, 00:07:36.941 { 00:07:36.942 "nbd_device": "/dev/nbd1", 00:07:36.942 "bdev_name": "Nvme1n1p1" 00:07:36.942 }, 00:07:36.942 { 00:07:36.942 "nbd_device": "/dev/nbd2", 00:07:36.942 "bdev_name": "Nvme1n1p2" 00:07:36.942 }, 00:07:36.942 { 00:07:36.942 "nbd_device": "/dev/nbd3", 00:07:36.942 "bdev_name": "Nvme2n1" 00:07:36.942 }, 00:07:36.942 { 00:07:36.942 "nbd_device": "/dev/nbd4", 00:07:36.942 "bdev_name": "Nvme2n2" 00:07:36.942 }, 00:07:36.942 { 00:07:36.942 "nbd_device": "/dev/nbd5", 00:07:36.942 "bdev_name": "Nvme2n3" 00:07:36.942 }, 00:07:36.942 { 00:07:36.942 "nbd_device": "/dev/nbd6", 00:07:36.942 "bdev_name": "Nvme3n1" 00:07:36.942 } 00:07:36.942 ]' 00:07:36.942 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:36.942 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 
/dev/nbd6' 00:07:36.942 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:36.942 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:36.942 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:36.942 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:36.942 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:36.942 22:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:37.204 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:37.204 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:37.204 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:37.204 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:37.204 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:37.204 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:37.204 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:37.204 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:37.204 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.204 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:37.466 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:37.466 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:37.466 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:37.466 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:37.466 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:37.466 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:37.466 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:37.466 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:37.466 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.466 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:37.466 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:37.466 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:37.466 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:37.466 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:37.466 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:37.466 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:37.466 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:37.466 22:52:16 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:37.466 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.466 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:37.728 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:37.728 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:37.728 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:37.728 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:37.728 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:37.728 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:37.728 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:37.728 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:37.728 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.728 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:37.987 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:37.987 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:37.987 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:37.987 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:37.987 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:37.987 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:37.987 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:37.987 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:37.987 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.987 22:52:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:37.987 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:37.987 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:37.987 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:37.987 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:37.987 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:37.987 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:37.987 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:37.987 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:37.987 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.987 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:38.248 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd6 00:07:38.248 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:38.248 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:07:38.248 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:38.248 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:38.248 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:38.248 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:38.248 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:38.248 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:38.248 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.248 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:38.554 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:38.554 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:38.554 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:38.554 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:38.554 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:38.554 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:38.554 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:38.554 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:38.554 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:38.554 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:38.554 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:38.554 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:38.554 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:38.554 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.554 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:38.554 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:38.554 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:38.554 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:38.554 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:38.555 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.555 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # 
bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:38.555 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:38.555 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:38.555 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:38.555 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:38.555 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:38.555 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:38.555 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:38.555 /dev/nbd0 00:07:38.813 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:38.813 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:38.813 22:52:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:38.813 22:52:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:38.813 22:52:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:38.813 22:52:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:38.813 22:52:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:38.813 22:52:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:38.813 22:52:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:38.813 22:52:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:38.813 22:52:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:38.813 1+0 records in 00:07:38.813 1+0 records out 00:07:38.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000512298 s, 8.0 MB/s 00:07:38.813 22:52:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:38.813 22:52:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:38.813 22:52:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:38.813 22:52:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:38.813 22:52:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:38.813 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:38.813 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:38.813 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:38.813 /dev/nbd1 00:07:38.813 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:38.813 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:38.813 22:52:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:38.813 22:52:17 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:38.813 22:52:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:38.813 22:52:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:38.813 22:52:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:38.813 22:52:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:38.813 22:52:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:38.813 22:52:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:38.813 22:52:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:38.813 1+0 records in 00:07:38.813 1+0 records out 00:07:38.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040153 s, 10.2 MB/s 00:07:38.813 22:52:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.076 22:52:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:39.076 22:52:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.076 22:52:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:39.076 22:52:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:39.076 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:39.076 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:39.076 22:52:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:39.076 /dev/nbd10 00:07:39.076 22:52:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:39.076 22:52:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:39.076 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:39.076 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:39.076 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:39.076 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:39.076 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:39.076 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:39.076 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:39.076 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:39.076 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:39.076 1+0 records in 00:07:39.076 1+0 records out 00:07:39.076 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000854181 s, 4.8 MB/s 00:07:39.076 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.076 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:39.076 
22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.076 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:39.076 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:39.076 22:52:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:39.076 22:52:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:39.076 22:52:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:39.338 /dev/nbd11 00:07:39.338 22:52:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:39.338 22:52:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:39.338 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:39.338 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:39.338 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:39.338 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:39.338 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:39.338 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:39.338 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:39.338 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:39.338 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:39.338 1+0 records in 00:07:39.338 1+0 records out 00:07:39.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00268967 s, 1.5 MB/s 00:07:39.338 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.338 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:39.338 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.338 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:39.338 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:39.338 22:52:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:39.338 22:52:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:39.338 22:52:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:39.598 /dev/nbd12 00:07:39.598 22:52:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:39.598 22:52:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:39.598 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:39.598 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:39.598 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:39.598 22:52:18 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:39.598 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:07:39.598 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:39.599 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:39.599 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:39.599 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:39.599 1+0 records in 00:07:39.599 1+0 records out 00:07:39.599 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469932 s, 8.7 MB/s 00:07:39.599 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.599 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:39.599 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.599 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:39.599 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:39.599 22:52:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:39.599 22:52:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:39.599 22:52:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:39.860 /dev/nbd13 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:39.860 1+0 records in 00:07:39.860 1+0 records out 00:07:39.860 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000668085 s, 6.1 MB/s 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:39.860 /dev/nbd14 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:39.860 1+0 records in 00:07:39.860 1+0 records out 00:07:39.860 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000947884 s, 4.3 MB/s 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.860 22:52:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:40.122 22:52:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:40.122 { 00:07:40.122 "nbd_device": "/dev/nbd0", 00:07:40.122 "bdev_name": "Nvme0n1" 00:07:40.122 }, 00:07:40.122 { 00:07:40.122 "nbd_device": "/dev/nbd1", 00:07:40.122 "bdev_name": "Nvme1n1p1" 00:07:40.122 }, 00:07:40.122 { 00:07:40.122 "nbd_device": "/dev/nbd10", 00:07:40.122 "bdev_name": "Nvme1n1p2" 00:07:40.122 }, 00:07:40.122 { 00:07:40.122 "nbd_device": "/dev/nbd11", 00:07:40.122 "bdev_name": "Nvme2n1" 00:07:40.122 
}, 00:07:40.122 { 00:07:40.122 "nbd_device": "/dev/nbd12", 00:07:40.122 "bdev_name": "Nvme2n2" 00:07:40.122 }, 00:07:40.122 { 00:07:40.122 "nbd_device": "/dev/nbd13", 00:07:40.122 "bdev_name": "Nvme2n3" 00:07:40.122 }, 00:07:40.122 { 00:07:40.122 "nbd_device": "/dev/nbd14", 00:07:40.122 "bdev_name": "Nvme3n1" 00:07:40.122 } 00:07:40.122 ]' 00:07:40.122 22:52:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:40.122 { 00:07:40.122 "nbd_device": "/dev/nbd0", 00:07:40.122 "bdev_name": "Nvme0n1" 00:07:40.122 }, 00:07:40.122 { 00:07:40.122 "nbd_device": "/dev/nbd1", 00:07:40.122 "bdev_name": "Nvme1n1p1" 00:07:40.122 }, 00:07:40.122 { 00:07:40.122 "nbd_device": "/dev/nbd10", 00:07:40.122 "bdev_name": "Nvme1n1p2" 00:07:40.122 }, 00:07:40.122 { 00:07:40.122 "nbd_device": "/dev/nbd11", 00:07:40.122 "bdev_name": "Nvme2n1" 00:07:40.122 }, 00:07:40.122 { 00:07:40.122 "nbd_device": "/dev/nbd12", 00:07:40.122 "bdev_name": "Nvme2n2" 00:07:40.122 }, 00:07:40.122 { 00:07:40.122 "nbd_device": "/dev/nbd13", 00:07:40.122 "bdev_name": "Nvme2n3" 00:07:40.122 }, 00:07:40.122 { 00:07:40.122 "nbd_device": "/dev/nbd14", 00:07:40.122 "bdev_name": "Nvme3n1" 00:07:40.122 } 00:07:40.122 ]' 00:07:40.122 22:52:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:40.122 22:52:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:40.122 /dev/nbd1 00:07:40.122 /dev/nbd10 00:07:40.122 /dev/nbd11 00:07:40.122 /dev/nbd12 00:07:40.122 /dev/nbd13 00:07:40.122 /dev/nbd14' 00:07:40.122 22:52:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:40.122 /dev/nbd1 00:07:40.122 /dev/nbd10 00:07:40.122 /dev/nbd11 00:07:40.122 /dev/nbd12 00:07:40.122 /dev/nbd13 00:07:40.122 /dev/nbd14' 00:07:40.122 22:52:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:40.122 22:52:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:40.122 22:52:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:40.122 22:52:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:40.122 22:52:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:40.122 22:52:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:40.122 22:52:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:40.122 22:52:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:40.122 22:52:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:40.122 22:52:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:40.122 22:52:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:40.122 22:52:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:40.122 256+0 records in 00:07:40.122 256+0 records out 00:07:40.122 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00672931 s, 156 MB/s 00:07:40.122 22:52:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:40.122 22:52:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd 
if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:40.383 256+0 records in 00:07:40.383 256+0 records out 00:07:40.383 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.195234 s, 5.4 MB/s 00:07:40.383 22:52:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:40.383 22:52:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:40.383 256+0 records in 00:07:40.383 256+0 records out 00:07:40.383 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.113983 s, 9.2 MB/s 00:07:40.383 22:52:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:40.383 22:52:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:40.644 256+0 records in 00:07:40.644 256+0 records out 00:07:40.644 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.215672 s, 4.9 MB/s 00:07:40.644 22:52:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:40.644 22:52:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:40.906 256+0 records in 00:07:40.906 256+0 records out 00:07:40.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.112367 s, 9.3 MB/s 00:07:40.906 22:52:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:40.906 22:52:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:40.906 256+0 records in 00:07:40.906 256+0 records out 00:07:40.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.180408 s, 5.8 MB/s 00:07:40.906 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:40.906 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:41.164 256+0 records in 00:07:41.164 256+0 records out 00:07:41.164 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.187659 s, 5.6 MB/s 00:07:41.164 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:41.164 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:41.423 256+0 records in 00:07:41.423 256+0 records out 00:07:41.423 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0990709 s, 10.6 MB/s 00:07:41.423 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:41.423 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:41.423 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:41.423 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:41.423 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:41.423 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = 
write ']' 00:07:41.423 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:41.423 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:41.423 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:41.423 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:41.423 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:41.423 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:41.423 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:41.423 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:41.423 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:41.423 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:41.423 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:41.423 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:41.423 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:41.423 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:41.423 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:41.423 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:41.423 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:41.423 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.423 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:41.423 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:41.423 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:41.423 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:41.423 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:41.684 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:41.684 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:41.684 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:41.684 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:41.684 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:41.684 22:52:20 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:41.684 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:41.684 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:41.684 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:41.684 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:41.684 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:41.684 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:41.684 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:41.684 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:41.684 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:41.684 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:41.684 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:41.684 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:41.684 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:41.684 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:41.946 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:41.946 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:41.946 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:41.946 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:41.946 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:41.946 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:41.946 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:41.946 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:41.946 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:41.946 22:52:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:42.208 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:42.208 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:42.208 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:42.208 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.208 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.209 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:42.209 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:42.209 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.209 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.209 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:42.470 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:42.470 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:42.470 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:42.470 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.470 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.470 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:42.470 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:42.470 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.470 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.470 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:42.470 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:42.731 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:42.731 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:42.731 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.731 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.731 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:42.731 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:42.731 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.731 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.731 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:42.731 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:42.731 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:42.731 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:42.731 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.731 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.731 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:42.731 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:42.731 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.731 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:42.731 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.731 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:42.992 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:42.992 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 
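
The nbd_stop_disk calls above are each followed by a waitfornbd_exit poll so the test never races the kernel's asynchronous device removal. A minimal sketch of that polling idea; the 20-iteration bound matches the trace, while the grep polarity and the sleep interval are assumptions:

    # Wait until the named nbd device drops out of /proc/partitions (illustrative).
    waitfornbd_exit() {
        local nbd_name=$1
        local i
        for ((i = 1; i <= 20; i++)); do
            if ! grep -q -w "$nbd_name" /proc/partitions; then
                return 0    # device detached, teardown can continue
            fi
            sleep 0.1       # assumed delay; the upstream helper may differ
        done
        return 1            # still present after the bound
    }
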
00:07:42.992 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:42.992 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:42.992 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:42.992 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:42.992 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:42.993 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:42.993 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:42.993 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:42.993 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:42.993 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:42.993 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:42.993 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.993 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:42.993 22:52:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:43.252 malloc_lvol_verify 00:07:43.252 22:52:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:43.513 9154d86f-75e8-420e-a78a-3c7adf145050 00:07:43.513 22:52:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:43.513 78aca7c9-455c-4f2d-a6aa-b49167aaa071 00:07:43.513 22:52:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:43.775 /dev/nbd0 00:07:43.775 22:52:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:43.776 22:52:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:43.776 22:52:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:43.776 22:52:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:43.776 22:52:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:43.776 mke2fs 1.47.0 (5-Feb-2023) 00:07:43.776 Discarding device blocks: 0/4096 done 00:07:43.776 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:43.776 00:07:43.776 Allocating group tables: 0/1 done 00:07:43.776 Writing inode tables: 0/1 done 00:07:43.776 Creating journal (1024 blocks): done 00:07:43.776 Writing superblocks and filesystem accounting information: 0/1 done 00:07:43.776 00:07:43.776 22:52:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:43.776 22:52:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.776 22:52:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:43.776 22:52:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 
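
The nbd_with_lvol_verify step just traced builds a logical volume from nothing and proves the nbd export works end to end by putting a filesystem on it. The RPC sequence, reconstructed from the trace (the $rpc shorthand is illustrative; paths and sizes are as logged):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB malloc bdev, 512-byte blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # prints the new lvstore UUID
    $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol in that store (8192 sectors)
    $rpc nbd_start_disk lvs/lvol /dev/nbd0                 # export the lvol as /dev/nbd0
    mkfs.ext4 /dev/nbd0                                    # must succeed for the test to pass
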
00:07:43.776 22:52:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:43.776 22:52:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.776 22:52:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:44.038 22:52:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:44.038 22:52:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:44.038 22:52:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:44.038 22:52:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:44.038 22:52:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:44.038 22:52:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:44.038 22:52:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:44.038 22:52:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:44.038 22:52:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74871 00:07:44.038 22:52:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74871 ']' 00:07:44.038 22:52:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74871 00:07:44.038 22:52:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:44.038 22:52:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:44.038 22:52:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74871 00:07:44.038 22:52:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:44.038 killing process with pid 74871 00:07:44.038 22:52:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:44.038 22:52:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74871' 00:07:44.038 22:52:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74871 00:07:44.038 22:52:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74871 00:07:44.300 22:52:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:44.300 00:07:44.300 real 0m9.988s 00:07:44.300 user 0m13.940s 00:07:44.300 sys 0m3.493s 00:07:44.300 22:52:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.300 ************************************ 00:07:44.300 END TEST bdev_nbd 00:07:44.300 ************************************ 00:07:44.300 22:52:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:44.301 22:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:44.301 22:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:07:44.301 skipping fio tests on NVMe due to multi-ns failures. 00:07:44.301 22:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:07:44.301 22:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
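
killprocess is the final teardown step above: it refuses to signal anything that is not the expected SPDK reactor, then kills the daemon and reaps it so the exit status is recorded. A rough reconstruction of the flow visible at autotest_common.sh@954-978; the error-handling branches are assumptions:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1             # pid sanity check ('[ -z ... ]' in the trace)
        kill -0 "$pid" || return 1            # liveness probe; the failure path is assumed
        local process_name
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # The real helper special-cases process_name == sudo; this run took the
        # plain path because the pid belonged to reactor_0.
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                           # reap, so failures propagate to the test
    }
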
00:07:44.301 22:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:44.301 22:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:44.301 22:52:23 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:44.301 22:52:23 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.301 22:52:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:44.301 ************************************ 00:07:44.301 START TEST bdev_verify 00:07:44.301 ************************************ 00:07:44.301 22:52:23 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:44.301 [2024-11-26 22:52:23.360175] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:07:44.301 [2024-11-26 22:52:23.360278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75273 ] 00:07:44.562 [2024-11-26 22:52:23.493224] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:44.562 [2024-11-26 22:52:23.523204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:44.562 [2024-11-26 22:52:23.549269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.562 [2024-11-26 22:52:23.549424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.135 Running I/O for 5 seconds... 
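Note: bdev_verify drives every bdev described in bdev.json through the standalone bdevperf example. The invocation from the run_test line above, flag by flag (paths shortened):

    # -q 128 : queue depth per job        -o 4096 : 4 KiB I/Os
    # -w verify : checked read/write workload
    # -t 5  : run for 5 seconds           -m 0x3  : cores 0 and 1
    # -C    : every core submits I/O to every bdev, which matches the paired
    #         job lines per device (core masks 0x1 and 0x2) in the table below
    build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3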
00:07:47.121 20480.00 IOPS, 80.00 MiB/s [2024-11-26T22:52:27.635Z] 20640.00 IOPS, 80.62 MiB/s [2024-11-26T22:52:28.575Z] 21098.67 IOPS, 82.42 MiB/s [2024-11-26T22:52:29.146Z] 21712.00 IOPS, 84.81 MiB/s [2024-11-26T22:52:29.146Z] 21403.00 IOPS, 83.61 MiB/s 00:07:50.019 Latency(us) 00:07:50.019 [2024-11-26T22:52:29.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:50.019 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:50.019 Verification LBA range: start 0x0 length 0xbd0bd 00:07:50.019 Nvme0n1 : 5.05 1445.97 5.65 0.00 0.00 88130.16 19358.33 85095.98 00:07:50.019 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:50.019 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:50.019 Nvme0n1 : 5.08 1562.56 6.10 0.00 0.00 81638.47 6150.30 87515.77 00:07:50.019 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:50.019 Verification LBA range: start 0x0 length 0x4ff80 00:07:50.019 Nvme1n1p1 : 5.08 1449.95 5.66 0.00 0.00 87744.22 16938.54 77030.01 00:07:50.019 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:50.019 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:50.019 Nvme1n1p1 : 5.08 1560.71 6.10 0.00 0.00 81482.95 11292.36 72593.72 00:07:50.019 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:50.019 Verification LBA range: start 0x0 length 0x4ff7f 00:07:50.019 Nvme1n1p2 : 5.08 1449.21 5.66 0.00 0.00 87630.50 17442.66 72593.72 00:07:50.019 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:50.020 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:50.020 Nvme1n1p2 : 5.09 1560.20 6.09 0.00 0.00 81330.51 11494.01 67350.84 00:07:50.020 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:50.020 Verification LBA range: start 0x0 length 0x80000 00:07:50.020 Nvme2n1 : 5.09 1446.05 5.65 0.00 0.00 87865.50 5595.77 73400.32 00:07:50.020 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:50.020 Verification LBA range: start 0x80000 length 0x80000 00:07:50.020 Nvme2n1 : 5.09 1559.78 6.09 0.00 0.00 81096.82 11544.42 66544.25 00:07:50.020 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:50.020 Verification LBA range: start 0x0 length 0x80000 00:07:50.020 Nvme2n2 : 5.09 1445.27 5.65 0.00 0.00 87705.32 7057.72 73400.32 00:07:50.020 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:50.020 Verification LBA range: start 0x80000 length 0x80000 00:07:50.020 Nvme2n2 : 5.09 1559.37 6.09 0.00 0.00 80987.28 16535.24 66947.54 00:07:50.020 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:50.020 Verification LBA range: start 0x0 length 0x80000 00:07:50.020 Nvme2n3 : 5.10 1444.42 5.64 0.00 0.00 87597.42 9275.86 73400.32 00:07:50.020 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:50.020 Verification LBA range: start 0x80000 length 0x80000 00:07:50.020 Nvme2n3 : 5.09 1558.52 6.09 0.00 0.00 80877.84 13308.85 68964.04 00:07:50.020 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:50.020 Verification LBA range: start 0x0 length 0x20000 00:07:50.020 Nvme3n1 : 5.10 1444.22 5.64 0.00 0.00 87395.56 2432.39 75013.51 00:07:50.020 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:50.020 Verification LBA range: start 0x20000 length 0x20000 00:07:50.020 Nvme3n1 : 
5.10 1557.59 6.08 0.00 0.00 80847.82 10989.88 70577.23 00:07:50.020 [2024-11-26T22:52:29.147Z] =================================================================================================================== 00:07:50.020 [2024-11-26T22:52:29.147Z] Total : 21043.84 82.20 0.00 0.00 84326.89 2432.39 87515.77 00:07:52.575 00:07:52.575 real 0m7.797s 00:07:52.575 user 0m14.722s 00:07:52.575 sys 0m0.291s 00:07:52.575 22:52:31 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.575 ************************************ 00:07:52.575 END TEST bdev_verify 00:07:52.575 ************************************ 00:07:52.575 22:52:31 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:52.575 22:52:31 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:52.575 22:52:31 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:52.575 22:52:31 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.575 22:52:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:52.575 ************************************ 00:07:52.575 START TEST bdev_verify_big_io 00:07:52.575 ************************************ 00:07:52.575 22:52:31 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:52.575 [2024-11-26 22:52:31.249729] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:07:52.575 [2024-11-26 22:52:31.249910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75372 ] 00:07:52.575 [2024-11-26 22:52:31.389283] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:52.575 [2024-11-26 22:52:31.419086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:52.575 [2024-11-26 22:52:31.461820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.575 [2024-11-26 22:52:31.461896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.147 Running I/O for 5 seconds... 
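Note: the MiB/s column is just IOPS times I/O size. Checking the bdev_verify rows above against their 4096-byte I/Os:

    awk 'BEGIN { print 21043.84 * 4096 / 2^20 }'   # 82.2031 -> the 82.20 MiB/s Total row
    awk 'BEGIN { print 1445.97 * 4096 / 2^20 }'    # 5.6483  -> the 5.65 MiB/s Nvme0n1 job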
00:07:59.050 2193.00 IOPS, 137.06 MiB/s [2024-11-26T22:52:38.742Z] 3715.50 IOPS, 232.22 MiB/s [2024-11-26T22:52:39.000Z] 3484.67 IOPS, 217.79 MiB/s 00:07:59.873 Latency(us) 00:07:59.873 [2024-11-26T22:52:39.000Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.873 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:59.873 Verification LBA range: start 0x0 length 0xbd0b 00:07:59.873 Nvme0n1 : 5.80 110.72 6.92 0.00 0.00 1103290.24 20568.22 1103424.59 00:07:59.873 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:59.873 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:59.873 Nvme0n1 : 6.16 51.96 3.25 0.00 0.00 2298880.00 18249.26 2426243.54 00:07:59.873 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:59.873 Verification LBA range: start 0x0 length 0x4ff8 00:07:59.873 Nvme1n1p1 : 5.80 99.29 6.21 0.00 0.00 1191080.39 92355.35 1716438.25 00:07:59.873 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:59.873 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:59.873 Nvme1n1p1 : 6.10 79.66 4.98 0.00 0.00 1450940.90 29844.09 1690627.15 00:07:59.873 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:59.873 Verification LBA range: start 0x0 length 0x4ff7 00:07:59.873 Nvme1n1p2 : 5.87 113.38 7.09 0.00 0.00 1029728.92 113730.17 1445421.69 00:07:59.873 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:59.873 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:59.873 Nvme1n1p2 : 6.10 79.81 4.99 0.00 0.00 1351746.92 28835.84 1413157.81 00:07:59.873 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:59.873 Verification LBA range: start 0x0 length 0x8000 00:07:59.873 Nvme2n1 : 5.89 118.77 7.42 0.00 0.00 964291.81 61704.66 1458327.24 00:07:59.873 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:59.873 Verification LBA range: start 0x8000 length 0x8000 00:07:59.873 Nvme2n1 : 6.21 98.22 6.14 0.00 0.00 1056928.66 18551.73 1458327.24 00:07:59.873 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:59.873 Verification LBA range: start 0x0 length 0x8000 00:07:59.873 Nvme2n2 : 5.94 124.46 7.78 0.00 0.00 894046.25 26819.35 1219574.55 00:07:59.873 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:59.873 Verification LBA range: start 0x8000 length 0x8000 00:07:59.873 Nvme2n2 : 6.30 121.98 7.62 0.00 0.00 821322.50 20769.87 1484138.34 00:07:59.873 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:59.873 Verification LBA range: start 0x0 length 0x8000 00:07:59.873 Nvme2n3 : 5.98 129.06 8.07 0.00 0.00 836711.49 42346.34 1232480.10 00:07:59.873 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:59.873 Verification LBA range: start 0x8000 length 0x8000 00:07:59.873 Nvme2n3 : 6.53 192.43 12.03 0.00 0.00 497146.13 6654.42 1522854.99 00:07:59.873 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:59.873 Verification LBA range: start 0x0 length 0x2000 00:07:59.873 Nvme3n1 : 6.01 149.08 9.32 0.00 0.00 711685.35 1487.16 1109877.37 00:07:59.873 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:59.873 Verification LBA range: start 0x2000 length 0x2000 00:07:59.873 Nvme3n1 : 6.78 339.62 21.23 0.00 0.00 271037.36 460.01 1561571.64 00:07:59.873 
[2024-11-26T22:52:39.000Z] =================================================================================================================== 00:07:59.873 [2024-11-26T22:52:39.000Z] Total : 1808.44 113.03 0.00 0.00 830729.34 460.01 2426243.54 00:08:00.807 00:08:00.807 real 0m8.601s 00:08:00.807 user 0m16.297s 00:08:00.807 sys 0m0.340s 00:08:00.807 22:52:39 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.807 22:52:39 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:00.807 ************************************ 00:08:00.807 END TEST bdev_verify_big_io 00:08:00.807 ************************************ 00:08:00.807 22:52:39 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:00.807 22:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:00.807 22:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.807 22:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:00.807 ************************************ 00:08:00.807 START TEST bdev_write_zeroes 00:08:00.807 ************************************ 00:08:00.807 22:52:39 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:00.807 [2024-11-26 22:52:39.874646] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:08:00.807 [2024-11-26 22:52:39.874755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75481 ] 00:08:01.066 [2024-11-26 22:52:40.006544] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:01.066 [2024-11-26 22:52:40.030324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.066 [2024-11-26 22:52:40.052126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.325 Running I/O for 1 seconds... 
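Note: the same IOPS-times-size identity holds for the 64 KiB big-I/O run that just finished: 1808.44 * 65536 / 2^20 = 113.03 MiB/s, the Total row above. bdev_write_zeroes then swaps the workload for write_zeroes and runs for one second on a single core (the EAL parameters line above shows -c 0x1). The invocation, per the run_test line:

    build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1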
00:08:02.700 69440.00 IOPS, 271.25 MiB/s 00:08:02.700 Latency(us) 00:08:02.700 [2024-11-26T22:52:41.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:02.700 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:02.700 Nvme0n1 : 1.02 9880.09 38.59 0.00 0.00 12927.55 9376.69 25206.15 00:08:02.700 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:02.700 Nvme1n1p1 : 1.02 9867.95 38.55 0.00 0.00 12926.14 9628.75 25306.98 00:08:02.700 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:02.700 Nvme1n1p2 : 1.03 9855.93 38.50 0.00 0.00 12903.81 9477.51 24197.91 00:08:02.700 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:02.700 Nvme2n1 : 1.03 9844.80 38.46 0.00 0.00 12897.22 9225.45 23492.14 00:08:02.700 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:02.700 Nvme2n2 : 1.03 9833.79 38.41 0.00 0.00 12872.06 9679.16 22887.19 00:08:02.700 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:02.700 Nvme2n3 : 1.03 9822.77 38.37 0.00 0.00 12846.25 8065.97 23895.43 00:08:02.700 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:02.700 Nvme3n1 : 1.03 9811.82 38.33 0.00 0.00 12829.28 7007.31 25508.63 00:08:02.700 [2024-11-26T22:52:41.827Z] =================================================================================================================== 00:08:02.700 [2024-11-26T22:52:41.827Z] Total : 68917.15 269.21 0.00 0.00 12886.04 7007.31 25508.63 00:08:02.700 00:08:02.700 real 0m1.865s 00:08:02.700 user 0m1.573s 00:08:02.700 sys 0m0.184s 00:08:02.700 22:52:41 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.700 22:52:41 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:02.700 ************************************ 00:08:02.700 END TEST bdev_write_zeroes 00:08:02.700 ************************************ 00:08:02.700 22:52:41 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:02.700 22:52:41 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:02.700 22:52:41 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.700 22:52:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:02.700 ************************************ 00:08:02.700 START TEST bdev_json_nonenclosed 00:08:02.700 ************************************ 00:08:02.700 22:52:41 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:02.700 [2024-11-26 22:52:41.787374] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
00:08:02.700 [2024-11-26 22:52:41.787480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75522 ] 00:08:02.959 [2024-11-26 22:52:41.920805] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:02.959 [2024-11-26 22:52:41.950740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.959 [2024-11-26 22:52:41.974887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.959 [2024-11-26 22:52:41.974969] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:02.959 [2024-11-26 22:52:41.974987] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:02.959 [2024-11-26 22:52:41.974996] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:02.959 00:08:02.959 real 0m0.319s 00:08:02.959 user 0m0.123s 00:08:02.959 sys 0m0.094s 00:08:02.959 22:52:42 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.959 22:52:42 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:02.959 ************************************ 00:08:02.959 END TEST bdev_json_nonenclosed 00:08:02.959 ************************************ 00:08:02.959 22:52:42 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:02.959 22:52:42 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:02.959 22:52:42 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.959 22:52:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:02.959 ************************************ 00:08:02.959 START TEST bdev_json_nonarray 00:08:02.959 ************************************ 00:08:02.959 22:52:42 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:03.218 [2024-11-26 22:52:42.147489] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:08:03.218 [2024-11-26 22:52:42.147609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75543 ] 00:08:03.218 [2024-11-26 22:52:42.279866] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:03.218 [2024-11-26 22:52:42.310418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.218 [2024-11-26 22:52:42.334330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.218 [2024-11-26 22:52:42.334419] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
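Note: the two JSON negative tests hand bdevperf a deliberately malformed --json config and expect json_config_prepare_ctx to reject it: nonenclosed.json lacks the outer braces, nonarray.json binds "subsystems" to something other than an array. A sketch of the shapes involved (the fixture files' exact contents are not shown in this log, only the error strings; these bodies are illustrative):

    # valid shape: a top-level object whose "subsystems" key is an array
    echo '{ "subsystems": [] }' > good.json
    # missing the enclosing {}  -> "Invalid JSON configuration: not enclosed in {}."
    echo '"subsystems": []'     > nonenclosed.json
    # "subsystems" not an array -> "Invalid JSON configuration: 'subsystems' should be an array."
    echo '{ "subsystems": {} }' > nonarray.json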
00:08:03.218 [2024-11-26 22:52:42.334436] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:03.218 [2024-11-26 22:52:42.334447] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:03.477 00:08:03.477 real 0m0.321s 00:08:03.477 user 0m0.120s 00:08:03.477 sys 0m0.097s 00:08:03.477 22:52:42 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.477 22:52:42 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:03.477 ************************************ 00:08:03.477 END TEST bdev_json_nonarray 00:08:03.477 ************************************ 00:08:03.477 22:52:42 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:08:03.477 22:52:42 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:08:03.477 22:52:42 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:03.477 22:52:42 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:03.477 22:52:42 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.477 22:52:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:03.477 ************************************ 00:08:03.477 START TEST bdev_gpt_uuid 00:08:03.477 ************************************ 00:08:03.477 22:52:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:08:03.477 22:52:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:08:03.477 22:52:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:08:03.477 22:52:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=75563 00:08:03.477 22:52:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:03.477 22:52:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 75563 00:08:03.477 22:52:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:03.477 22:52:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 75563 ']' 00:08:03.477 22:52:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.477 22:52:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.477 22:52:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.477 22:52:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.477 22:52:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:03.477 [2024-11-26 22:52:42.527152] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:08:03.477 [2024-11-26 22:52:42.527312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75563 ] 00:08:03.737 [2024-11-26 22:52:42.662332] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:08:03.737 [2024-11-26 22:52:42.691163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.737 [2024-11-26 22:52:42.715594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.310 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.310 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:08:04.310 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:04.310 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.310 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:04.570 Some configs were skipped because the RPC state that can call them passed over. 00:08:04.570 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.570 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:08:04.570 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.570 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:04.832 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.832 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:04.832 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.832 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:04.832 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.832 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:08:04.832 { 00:08:04.832 "name": "Nvme1n1p1", 00:08:04.832 "aliases": [ 00:08:04.832 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:04.832 ], 00:08:04.832 "product_name": "GPT Disk", 00:08:04.832 "block_size": 4096, 00:08:04.832 "num_blocks": 655104, 00:08:04.832 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:04.832 "assigned_rate_limits": { 00:08:04.832 "rw_ios_per_sec": 0, 00:08:04.832 "rw_mbytes_per_sec": 0, 00:08:04.832 "r_mbytes_per_sec": 0, 00:08:04.832 "w_mbytes_per_sec": 0 00:08:04.832 }, 00:08:04.832 "claimed": false, 00:08:04.832 "zoned": false, 00:08:04.832 "supported_io_types": { 00:08:04.832 "read": true, 00:08:04.832 "write": true, 00:08:04.832 "unmap": true, 00:08:04.832 "flush": true, 00:08:04.832 "reset": true, 00:08:04.832 "nvme_admin": false, 00:08:04.832 "nvme_io": false, 00:08:04.832 "nvme_io_md": false, 00:08:04.832 "write_zeroes": true, 00:08:04.832 "zcopy": false, 00:08:04.832 "get_zone_info": false, 00:08:04.832 "zone_management": false, 00:08:04.832 "zone_append": false, 00:08:04.832 "compare": true, 00:08:04.832 "compare_and_write": false, 00:08:04.832 "abort": true, 00:08:04.832 "seek_hole": false, 00:08:04.832 "seek_data": false, 00:08:04.832 "copy": true, 00:08:04.832 "nvme_iov_md": false 00:08:04.832 }, 00:08:04.832 "driver_specific": { 00:08:04.832 "gpt": { 00:08:04.832 "base_bdev": "Nvme1n1", 00:08:04.832 "offset_blocks": 256, 00:08:04.832 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:04.832 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 
00:08:04.832 "partition_name": "SPDK_TEST_first" 00:08:04.832 } 00:08:04.832 } 00:08:04.832 } 00:08:04.832 ]' 00:08:04.832 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:08:04.832 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:08:04.832 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:08:04.832 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:04.832 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:04.832 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:04.832 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:04.832 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.832 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:04.832 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.832 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:08:04.832 { 00:08:04.832 "name": "Nvme1n1p2", 00:08:04.832 "aliases": [ 00:08:04.832 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:04.832 ], 00:08:04.832 "product_name": "GPT Disk", 00:08:04.832 "block_size": 4096, 00:08:04.832 "num_blocks": 655103, 00:08:04.832 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:04.832 "assigned_rate_limits": { 00:08:04.832 "rw_ios_per_sec": 0, 00:08:04.832 "rw_mbytes_per_sec": 0, 00:08:04.832 "r_mbytes_per_sec": 0, 00:08:04.832 "w_mbytes_per_sec": 0 00:08:04.832 }, 00:08:04.832 "claimed": false, 00:08:04.832 "zoned": false, 00:08:04.832 "supported_io_types": { 00:08:04.832 "read": true, 00:08:04.832 "write": true, 00:08:04.832 "unmap": true, 00:08:04.832 "flush": true, 00:08:04.832 "reset": true, 00:08:04.832 "nvme_admin": false, 00:08:04.832 "nvme_io": false, 00:08:04.832 "nvme_io_md": false, 00:08:04.832 "write_zeroes": true, 00:08:04.832 "zcopy": false, 00:08:04.832 "get_zone_info": false, 00:08:04.832 "zone_management": false, 00:08:04.832 "zone_append": false, 00:08:04.832 "compare": true, 00:08:04.832 "compare_and_write": false, 00:08:04.832 "abort": true, 00:08:04.832 "seek_hole": false, 00:08:04.832 "seek_data": false, 00:08:04.832 "copy": true, 00:08:04.833 "nvme_iov_md": false 00:08:04.833 }, 00:08:04.833 "driver_specific": { 00:08:04.833 "gpt": { 00:08:04.833 "base_bdev": "Nvme1n1", 00:08:04.833 "offset_blocks": 655360, 00:08:04.833 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:04.833 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:04.833 "partition_name": "SPDK_TEST_second" 00:08:04.833 } 00:08:04.833 } 00:08:04.833 } 00:08:04.833 ]' 00:08:04.833 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:08:04.833 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:08:04.833 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:08:04.833 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ 
abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:04.833 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:04.833 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:04.833 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 75563 00:08:04.833 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 75563 ']' 00:08:04.833 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 75563 00:08:04.833 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:08:04.833 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.833 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75563 00:08:04.833 killing process with pid 75563 00:08:04.833 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:04.833 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:04.833 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75563' 00:08:04.833 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 75563 00:08:04.833 22:52:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 75563 00:08:05.405 00:08:05.405 real 0m1.821s 00:08:05.405 user 0m1.942s 00:08:05.405 sys 0m0.386s 00:08:05.405 ************************************ 00:08:05.405 END TEST bdev_gpt_uuid 00:08:05.405 ************************************ 00:08:05.405 22:52:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.406 22:52:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:05.406 22:52:44 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:08:05.406 22:52:44 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:08:05.406 22:52:44 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:08:05.406 22:52:44 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:05.406 22:52:44 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:05.406 22:52:44 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:05.406 22:52:44 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:05.406 22:52:44 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:05.406 22:52:44 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:05.667 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:05.928 Waiting for block devices as requested 00:08:05.929 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:05.929 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:05.929 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:06.189 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:11.476 * Events for some block/disk devices (0000:00:13.0) were not caught, 
they may be missing 00:08:11.476 22:52:50 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:11.476 22:52:50 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:11.476 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:11.476 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:11.476 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:11.476 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:11.476 22:52:50 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:11.476 00:08:11.476 real 0m50.293s 00:08:11.476 user 1m4.454s 00:08:11.476 sys 0m7.920s 00:08:11.476 22:52:50 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.476 22:52:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:11.476 ************************************ 00:08:11.476 END TEST blockdev_nvme_gpt 00:08:11.476 ************************************ 00:08:11.476 22:52:50 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:11.476 22:52:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:11.476 22:52:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.476 22:52:50 -- common/autotest_common.sh@10 -- # set +x 00:08:11.476 ************************************ 00:08:11.476 START TEST nvme 00:08:11.476 ************************************ 00:08:11.476 22:52:50 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:11.737 * Looking for test storage... 00:08:11.737 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:11.737 22:52:50 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:11.737 22:52:50 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:08:11.737 22:52:50 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:11.737 22:52:50 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:11.737 22:52:50 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.737 22:52:50 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.737 22:52:50 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.737 22:52:50 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.737 22:52:50 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.737 22:52:50 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.737 22:52:50 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.737 22:52:50 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.737 22:52:50 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.737 22:52:50 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.737 22:52:50 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.737 22:52:50 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:11.737 22:52:50 nvme -- scripts/common.sh@345 -- # : 1 00:08:11.737 22:52:50 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.737 22:52:50 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:11.737 22:52:50 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:11.737 22:52:50 nvme -- scripts/common.sh@353 -- # local d=1 00:08:11.737 22:52:50 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.737 22:52:50 nvme -- scripts/common.sh@355 -- # echo 1 00:08:11.737 22:52:50 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.737 22:52:50 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:11.737 22:52:50 nvme -- scripts/common.sh@353 -- # local d=2 00:08:11.737 22:52:50 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.737 22:52:50 nvme -- scripts/common.sh@355 -- # echo 2 00:08:11.737 22:52:50 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.737 22:52:50 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.737 22:52:50 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.737 22:52:50 nvme -- scripts/common.sh@368 -- # return 0 00:08:11.737 22:52:50 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.737 22:52:50 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:11.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.737 --rc genhtml_branch_coverage=1 00:08:11.737 --rc genhtml_function_coverage=1 00:08:11.737 --rc genhtml_legend=1 00:08:11.737 --rc geninfo_all_blocks=1 00:08:11.737 --rc geninfo_unexecuted_blocks=1 00:08:11.738 00:08:11.738 ' 00:08:11.738 22:52:50 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:11.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.738 --rc genhtml_branch_coverage=1 00:08:11.738 --rc genhtml_function_coverage=1 00:08:11.738 --rc genhtml_legend=1 00:08:11.738 --rc geninfo_all_blocks=1 00:08:11.738 --rc geninfo_unexecuted_blocks=1 00:08:11.738 00:08:11.738 ' 00:08:11.738 22:52:50 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:11.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.738 --rc genhtml_branch_coverage=1 00:08:11.738 --rc genhtml_function_coverage=1 00:08:11.738 --rc genhtml_legend=1 00:08:11.738 --rc geninfo_all_blocks=1 00:08:11.738 --rc geninfo_unexecuted_blocks=1 00:08:11.738 00:08:11.738 ' 00:08:11.738 22:52:50 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:11.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.738 --rc genhtml_branch_coverage=1 00:08:11.738 --rc genhtml_function_coverage=1 00:08:11.738 --rc genhtml_legend=1 00:08:11.738 --rc geninfo_all_blocks=1 00:08:11.738 --rc geninfo_unexecuted_blocks=1 00:08:11.738 00:08:11.738 ' 00:08:11.738 22:52:50 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:11.998 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:12.611 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:12.611 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:12.611 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:12.870 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:12.870 22:52:51 nvme -- nvme/nvme.sh@79 -- # uname 00:08:12.870 22:52:51 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:12.870 22:52:51 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:12.870 22:52:51 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:12.870 22:52:51 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:12.870 22:52:51 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:08:12.870 22:52:51 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:08:12.870 22:52:51 nvme -- common/autotest_common.sh@1075 -- # stubpid=76186 00:08:12.870 22:52:51 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:08:12.870 Waiting for stub to ready for secondary processes... 00:08:12.870 22:52:51 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:12.870 22:52:51 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/76186 ]] 00:08:12.871 22:52:51 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:12.871 22:52:51 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:12.871 [2024-11-26 22:52:51.810680] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:08:12.871 [2024-11-26 22:52:51.810806] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:13.813 [2024-11-26 22:52:52.771758] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:13.813 22:52:52 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:13.813 22:52:52 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/76186 ]] 00:08:13.813 22:52:52 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:13.813 [2024-11-26 22:52:52.798647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:13.813 [2024-11-26 22:52:52.821411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.813 [2024-11-26 22:52:52.821574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:13.813 [2024-11-26 22:52:52.821641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.813 [2024-11-26 22:52:52.837941] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:13.813 [2024-11-26 22:52:52.837997] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:13.813 [2024-11-26 22:52:52.855060] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:13.813 [2024-11-26 22:52:52.855308] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:13.813 [2024-11-26 22:52:52.857598] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:13.813 [2024-11-26 22:52:52.857804] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:13.813 [2024-11-26 22:52:52.857874] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:13.813 [2024-11-26 22:52:52.858941] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:13.813 [2024-11-26 22:52:52.859142] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:13.813 [2024-11-26 22:52:52.859232] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:13.813 [2024-11-26 22:52:52.860705] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 
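Note: the nvme suite starts one long-lived stub app (pid 76186 above) that holds the hugepage allocation and the CUSE devices; each test then attaches to it as a secondary process. The launcher polls until the stub is ready, sleeping 1s between checks, per the autotest_common.sh xtrace. A loose reconstruction (the readiness condition is inferred from the visible tests; the real logic lives in _start_stub):

    # -s 4096: 4096 MB of hugepage memory   -i 0: shared memory id 0
    # -m 0xE : cores 1-3, matching the three reactors started above
    test/app/stub/stub -s 4096 -i 0 -m 0xE &
    stubpid=$!
    echo "Waiting for stub to ready for secondary processes..."
    until [ -e /var/run/spdk_stub0 ]; do
      [[ -e /proc/$stubpid ]] || exit 1   # bail out if the stub died
      sleep 1s
    done
    echo done.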
00:08:13.813 [2024-11-26 22:52:52.860891] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:13.813 [2024-11-26 22:52:52.860952] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:13.813 [2024-11-26 22:52:52.861028] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:13.813 [2024-11-26 22:52:52.861120] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:14.755 22:52:53 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:14.755 22:52:53 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:08:14.755 done. 00:08:14.755 22:52:53 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:14.755 22:52:53 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:08:14.755 22:52:53 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.755 22:52:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:14.755 ************************************ 00:08:14.755 START TEST nvme_reset 00:08:14.755 ************************************ 00:08:14.755 22:52:53 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:15.016 Initializing NVMe Controllers 00:08:15.016 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:15.016 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:15.016 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:15.016 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:15.016 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:15.016 00:08:15.016 real 0m0.235s 00:08:15.016 user 0m0.071s 00:08:15.016 sys 0m0.114s 00:08:15.016 22:52:54 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.016 ************************************ 00:08:15.016 END TEST nvme_reset 00:08:15.016 ************************************ 00:08:15.016 22:52:54 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:15.016 22:52:54 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:15.016 22:52:54 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:15.016 22:52:54 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.016 22:52:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:15.016 ************************************ 00:08:15.016 START TEST nvme_identify 00:08:15.016 ************************************ 00:08:15.016 22:52:54 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:08:15.016 22:52:54 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:15.016 22:52:54 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:15.016 22:52:54 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:15.016 22:52:54 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:15.016 22:52:54 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:15.016 22:52:54 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:08:15.016 22:52:54 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:15.016 22:52:54 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:15.017 22:52:54 nvme.nvme_identify -- common/autotest_common.sh@1499 -- 
# /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:15.280 22:52:54 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:15.280 22:52:54 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:15.280 22:52:54 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:15.280 [2024-11-26 22:52:54.375261] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 76220 terminated unexpected 00:08:15.280 ===================================================== 00:08:15.280 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:15.280 ===================================================== 00:08:15.280 Controller Capabilities/Features 00:08:15.280 ================================ 00:08:15.280 Vendor ID: 1b36 00:08:15.280 Subsystem Vendor ID: 1af4 00:08:15.280 Serial Number: 12340 00:08:15.280 Model Number: QEMU NVMe Ctrl 00:08:15.280 Firmware Version: 8.0.0 00:08:15.280 Recommended Arb Burst: 6 00:08:15.280 IEEE OUI Identifier: 00 54 52 00:08:15.280 Multi-path I/O 00:08:15.280 May have multiple subsystem ports: No 00:08:15.280 May have multiple controllers: No 00:08:15.280 Associated with SR-IOV VF: No 00:08:15.280 Max Data Transfer Size: 524288 00:08:15.280 Max Number of Namespaces: 256 00:08:15.280 Max Number of I/O Queues: 64 00:08:15.280 NVMe Specification Version (VS): 1.4 00:08:15.280 NVMe Specification Version (Identify): 1.4 00:08:15.280 Maximum Queue Entries: 2048 00:08:15.280 Contiguous Queues Required: Yes 00:08:15.280 Arbitration Mechanisms Supported 00:08:15.280 Weighted Round Robin: Not Supported 00:08:15.280 Vendor Specific: Not Supported 00:08:15.280 Reset Timeout: 7500 ms 00:08:15.280 Doorbell Stride: 4 bytes 00:08:15.280 NVM Subsystem Reset: Not Supported 00:08:15.281 Command Sets Supported 00:08:15.281 NVM Command Set: Supported 00:08:15.281 Boot Partition: Not Supported 00:08:15.281 Memory Page Size Minimum: 4096 bytes 00:08:15.281 Memory Page Size Maximum: 65536 bytes 00:08:15.281 Persistent Memory Region: Not Supported 00:08:15.281 Optional Asynchronous Events Supported 00:08:15.281 Namespace Attribute Notices: Supported 00:08:15.281 Firmware Activation Notices: Not Supported 00:08:15.281 ANA Change Notices: Not Supported 00:08:15.281 PLE Aggregate Log Change Notices: Not Supported 00:08:15.281 LBA Status Info Alert Notices: Not Supported 00:08:15.281 EGE Aggregate Log Change Notices: Not Supported 00:08:15.281 Normal NVM Subsystem Shutdown event: Not Supported 00:08:15.281 Zone Descriptor Change Notices: Not Supported 00:08:15.281 Discovery Log Change Notices: Not Supported 00:08:15.281 Controller Attributes 00:08:15.281 128-bit Host Identifier: Not Supported 00:08:15.281 Non-Operational Permissive Mode: Not Supported 00:08:15.281 NVM Sets: Not Supported 00:08:15.281 Read Recovery Levels: Not Supported 00:08:15.281 Endurance Groups: Not Supported 00:08:15.281 Predictable Latency Mode: Not Supported 00:08:15.281 Traffic Based Keep ALive: Not Supported 00:08:15.281 Namespace Granularity: Not Supported 00:08:15.281 SQ Associations: Not Supported 00:08:15.281 UUID List: Not Supported 00:08:15.281 Multi-Domain Subsystem: Not Supported 00:08:15.281 Fixed Capacity Management: Not Supported 00:08:15.281 Variable Capacity Management: Not Supported 00:08:15.281 Delete Endurance Group: Not Supported 00:08:15.281 Delete NVM Set: Not Supported 00:08:15.281 Extended LBA Formats Supported: Supported 00:08:15.281 
Flexible Data Placement Supported: Not Supported 00:08:15.281 00:08:15.281 Controller Memory Buffer Support 00:08:15.281 ================================ 00:08:15.281 Supported: No 00:08:15.281 00:08:15.281 Persistent Memory Region Support 00:08:15.281 ================================ 00:08:15.281 Supported: No 00:08:15.281 00:08:15.281 Admin Command Set Attributes 00:08:15.281 ============================ 00:08:15.281 Security Send/Receive: Not Supported 00:08:15.281 Format NVM: Supported 00:08:15.281 Firmware Activate/Download: Not Supported 00:08:15.281 Namespace Management: Supported 00:08:15.281 Device Self-Test: Not Supported 00:08:15.281 Directives: Supported 00:08:15.281 NVMe-MI: Not Supported 00:08:15.281 Virtualization Management: Not Supported 00:08:15.281 Doorbell Buffer Config: Supported 00:08:15.281 Get LBA Status Capability: Not Supported 00:08:15.281 Command & Feature Lockdown Capability: Not Supported 00:08:15.281 Abort Command Limit: 4 00:08:15.281 Async Event Request Limit: 4 00:08:15.281 Number of Firmware Slots: N/A 00:08:15.281 Firmware Slot 1 Read-Only: N/A 00:08:15.281 Firmware Activation Without Reset: N/A 00:08:15.281 Multiple Update Detection Support: N/A 00:08:15.281 Firmware Update Granularity: No Information Provided 00:08:15.281 Per-Namespace SMART Log: Yes 00:08:15.281 Asymmetric Namespace Access Log Page: Not Supported 00:08:15.281 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:15.281 Command Effects Log Page: Supported 00:08:15.281 Get Log Page Extended Data: Supported 00:08:15.281 Telemetry Log Pages: Not Supported 00:08:15.281 Persistent Event Log Pages: Not Supported 00:08:15.281 Supported Log Pages Log Page: May Support 00:08:15.281 Commands Supported & Effects Log Page: Not Supported 00:08:15.281 Feature Identifiers & Effects Log Page:May Support 00:08:15.281 NVMe-MI Commands & Effects Log Page: May Support 00:08:15.281 Data Area 4 for Telemetry Log: Not Supported 00:08:15.281 Error Log Page Entries Supported: 1 00:08:15.281 Keep Alive: Not Supported 00:08:15.281 00:08:15.281 NVM Command Set Attributes 00:08:15.281 ========================== 00:08:15.281 Submission Queue Entry Size 00:08:15.281 Max: 64 00:08:15.281 Min: 64 00:08:15.281 Completion Queue Entry Size 00:08:15.281 Max: 16 00:08:15.281 Min: 16 00:08:15.281 Number of Namespaces: 256 00:08:15.281 Compare Command: Supported 00:08:15.281 Write Uncorrectable Command: Not Supported 00:08:15.281 Dataset Management Command: Supported 00:08:15.281 Write Zeroes Command: Supported 00:08:15.281 Set Features Save Field: Supported 00:08:15.281 Reservations: Not Supported 00:08:15.281 Timestamp: Supported 00:08:15.281 Copy: Supported 00:08:15.281 Volatile Write Cache: Present 00:08:15.281 Atomic Write Unit (Normal): 1 00:08:15.281 Atomic Write Unit (PFail): 1 00:08:15.281 Atomic Compare & Write Unit: 1 00:08:15.281 Fused Compare & Write: Not Supported 00:08:15.281 Scatter-Gather List 00:08:15.281 SGL Command Set: Supported 00:08:15.281 SGL Keyed: Not Supported 00:08:15.281 SGL Bit Bucket Descriptor: Not Supported 00:08:15.281 SGL Metadata Pointer: Not Supported 00:08:15.281 Oversized SGL: Not Supported 00:08:15.281 SGL Metadata Address: Not Supported 00:08:15.281 SGL Offset: Not Supported 00:08:15.281 Transport SGL Data Block: Not Supported 00:08:15.281 Replay Protected Memory Block: Not Supported 00:08:15.281 00:08:15.281 Firmware Slot Information 00:08:15.281 ========================= 00:08:15.281 Active slot: 1 00:08:15.281 Slot 1 Firmware Revision: 1.0 00:08:15.281 00:08:15.281 00:08:15.281 Commands 
Supported and Effects 00:08:15.281 ============================== 00:08:15.281 Admin Commands 00:08:15.281 -------------- 00:08:15.281 Delete I/O Submission Queue (00h): Supported 00:08:15.281 Create I/O Submission Queue (01h): Supported 00:08:15.281 Get Log Page (02h): Supported 00:08:15.281 Delete I/O Completion Queue (04h): Supported 00:08:15.281 Create I/O Completion Queue (05h): Supported 00:08:15.281 Identify (06h): Supported 00:08:15.281 Abort (08h): Supported 00:08:15.281 Set Features (09h): Supported 00:08:15.281 Get Features (0Ah): Supported 00:08:15.281 Asynchronous Event Request (0Ch): Supported 00:08:15.281 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:15.281 Directive Send (19h): Supported 00:08:15.281 Directive Receive (1Ah): Supported 00:08:15.281 Virtualization Management (1Ch): Supported 00:08:15.281 Doorbell Buffer Config (7Ch): Supported 00:08:15.281 Format NVM (80h): Supported LBA-Change 00:08:15.281 I/O Commands 00:08:15.281 ------------ 00:08:15.281 Flush (00h): Supported LBA-Change 00:08:15.281 Write (01h): Supported LBA-Change 00:08:15.281 Read (02h): Supported 00:08:15.281 Compare (05h): Supported 00:08:15.281 Write Zeroes (08h): Supported LBA-Change 00:08:15.281 Dataset Management (09h): Supported LBA-Change 00:08:15.281 Unknown (0Ch): Supported 00:08:15.281 Unknown (12h): Supported 00:08:15.281 Copy (19h): Supported LBA-Change 00:08:15.281 Unknown (1Dh): Supported LBA-Change 00:08:15.281 00:08:15.281 Error Log 00:08:15.281 ========= 00:08:15.281 00:08:15.281 Arbitration 00:08:15.281 =========== 00:08:15.281 Arbitration Burst: no limit 00:08:15.281 00:08:15.281 Power Management 00:08:15.281 ================ 00:08:15.281 Number of Power States: 1 00:08:15.281 Current Power State: Power State #0 00:08:15.281 Power State #0: 00:08:15.281 Max Power: 25.00 W 00:08:15.281 Non-Operational State: Operational 00:08:15.281 Entry Latency: 16 microseconds 00:08:15.281 Exit Latency: 4 microseconds 00:08:15.281 Relative Read Throughput: 0 00:08:15.281 Relative Read Latency: 0 00:08:15.281 Relative Write Throughput: 0 00:08:15.281 Relative Write Latency: 0 00:08:15.281 Idle Power[2024-11-26 22:52:54.377033] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 76220 terminated unexpected 00:08:15.281 : Not Reported 00:08:15.281 Active Power: Not Reported 00:08:15.281 Non-Operational Permissive Mode: Not Supported 00:08:15.281 00:08:15.281 Health Information 00:08:15.281 ================== 00:08:15.281 Critical Warnings: 00:08:15.281 Available Spare Space: OK 00:08:15.281 Temperature: OK 00:08:15.281 Device Reliability: OK 00:08:15.281 Read Only: No 00:08:15.281 Volatile Memory Backup: OK 00:08:15.281 Current Temperature: 323 Kelvin (50 Celsius) 00:08:15.281 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:15.281 Available Spare: 0% 00:08:15.281 Available Spare Threshold: 0% 00:08:15.281 Life Percentage Used: 0% 00:08:15.281 Data Units Read: 619 00:08:15.281 Data Units Written: 547 00:08:15.281 Host Read Commands: 35574 00:08:15.281 Host Write Commands: 35360 00:08:15.281 Controller Busy Time: 0 minutes 00:08:15.281 Power Cycles: 0 00:08:15.281 Power On Hours: 0 hours 00:08:15.281 Unsafe Shutdowns: 0 00:08:15.281 Unrecoverable Media Errors: 0 00:08:15.281 Lifetime Error Log Entries: 0 00:08:15.282 Warning Temperature Time: 0 minutes 00:08:15.282 Critical Temperature Time: 0 minutes 00:08:15.282 00:08:15.282 Number of Queues 00:08:15.282 ================ 00:08:15.282 Number of I/O Submission Queues: 64 00:08:15.282 Number of 
I/O Completion Queues: 64 00:08:15.282 00:08:15.282 ZNS Specific Controller Data 00:08:15.282 ============================ 00:08:15.282 Zone Append Size Limit: 0 00:08:15.282 00:08:15.282 00:08:15.282 Active Namespaces 00:08:15.282 ================= 00:08:15.282 Namespace ID:1 00:08:15.282 Error Recovery Timeout: Unlimited 00:08:15.282 Command Set Identifier: NVM (00h) 00:08:15.282 Deallocate: Supported 00:08:15.282 Deallocated/Unwritten Error: Supported 00:08:15.282 Deallocated Read Value: All 0x00 00:08:15.282 Deallocate in Write Zeroes: Not Supported 00:08:15.282 Deallocated Guard Field: 0xFFFF 00:08:15.282 Flush: Supported 00:08:15.282 Reservation: Not Supported 00:08:15.282 Metadata Transferred as: Separate Metadata Buffer 00:08:15.282 Namespace Sharing Capabilities: Private 00:08:15.282 Size (in LBAs): 1548666 (5GiB) 00:08:15.282 Capacity (in LBAs): 1548666 (5GiB) 00:08:15.282 Utilization (in LBAs): 1548666 (5GiB) 00:08:15.282 Thin Provisioning: Not Supported 00:08:15.282 Per-NS Atomic Units: No 00:08:15.282 Maximum Single Source Range Length: 128 00:08:15.282 Maximum Copy Length: 128 00:08:15.282 Maximum Source Range Count: 128 00:08:15.282 NGUID/EUI64 Never Reused: No 00:08:15.282 Namespace Write Protected: No 00:08:15.282 Number of LBA Formats: 8 00:08:15.282 Current LBA Format: LBA Format #07 00:08:15.282 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:15.282 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:15.282 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:15.282 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:15.282 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:15.282 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:15.282 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:15.282 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:15.282 00:08:15.282 NVM Specific Namespace Data 00:08:15.282 =========================== 00:08:15.282 Logical Block Storage Tag Mask: 0 00:08:15.282 Protection Information Capabilities: 00:08:15.282 16b Guard Protection Information Storage Tag Support: No 00:08:15.282 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:15.282 Storage Tag Check Read Support: No 00:08:15.282 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.282 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.282 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.282 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.282 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.282 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.282 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.282 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.282 ===================================================== 00:08:15.282 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:15.282 ===================================================== 00:08:15.282 Controller Capabilities/Features 00:08:15.282 ================================ 00:08:15.282 Vendor ID: 1b36 00:08:15.282 Subsystem Vendor ID: 1af4 00:08:15.282 Serial Number: 12341 00:08:15.282 Model Number: QEMU NVMe Ctrl 00:08:15.282 Firmware Version: 8.0.0 00:08:15.282 Recommended Arb Burst: 6 
00:08:15.282 IEEE OUI Identifier: 00 54 52 00:08:15.282 Multi-path I/O 00:08:15.282 May have multiple subsystem ports: No 00:08:15.282 May have multiple controllers: No 00:08:15.282 Associated with SR-IOV VF: No 00:08:15.282 Max Data Transfer Size: 524288 00:08:15.282 Max Number of Namespaces: 256 00:08:15.282 Max Number of I/O Queues: 64 00:08:15.282 NVMe Specification Version (VS): 1.4 00:08:15.282 NVMe Specification Version (Identify): 1.4 00:08:15.282 Maximum Queue Entries: 2048 00:08:15.282 Contiguous Queues Required: Yes 00:08:15.282 Arbitration Mechanisms Supported 00:08:15.282 Weighted Round Robin: Not Supported 00:08:15.282 Vendor Specific: Not Supported 00:08:15.282 Reset Timeout: 7500 ms 00:08:15.282 Doorbell Stride: 4 bytes 00:08:15.282 NVM Subsystem Reset: Not Supported 00:08:15.282 Command Sets Supported 00:08:15.282 NVM Command Set: Supported 00:08:15.282 Boot Partition: Not Supported 00:08:15.282 Memory Page Size Minimum: 4096 bytes 00:08:15.282 Memory Page Size Maximum: 65536 bytes 00:08:15.282 Persistent Memory Region: Not Supported 00:08:15.282 Optional Asynchronous Events Supported 00:08:15.282 Namespace Attribute Notices: Supported 00:08:15.282 Firmware Activation Notices: Not Supported 00:08:15.282 ANA Change Notices: Not Supported 00:08:15.282 PLE Aggregate Log Change Notices: Not Supported 00:08:15.282 LBA Status Info Alert Notices: Not Supported 00:08:15.282 EGE Aggregate Log Change Notices: Not Supported 00:08:15.282 Normal NVM Subsystem Shutdown event: Not Supported 00:08:15.282 Zone Descriptor Change Notices: Not Supported 00:08:15.282 Discovery Log Change Notices: Not Supported 00:08:15.282 Controller Attributes 00:08:15.282 128-bit Host Identifier: Not Supported 00:08:15.282 Non-Operational Permissive Mode: Not Supported 00:08:15.282 NVM Sets: Not Supported 00:08:15.282 Read Recovery Levels: Not Supported 00:08:15.282 Endurance Groups: Not Supported 00:08:15.282 Predictable Latency Mode: Not Supported 00:08:15.282 Traffic Based Keep ALive: Not Supported 00:08:15.282 Namespace Granularity: Not Supported 00:08:15.282 SQ Associations: Not Supported 00:08:15.282 UUID List: Not Supported 00:08:15.282 Multi-Domain Subsystem: Not Supported 00:08:15.282 Fixed Capacity Management: Not Supported 00:08:15.282 Variable Capacity Management: Not Supported 00:08:15.282 Delete Endurance Group: Not Supported 00:08:15.282 Delete NVM Set: Not Supported 00:08:15.282 Extended LBA Formats Supported: Supported 00:08:15.282 Flexible Data Placement Supported: Not Supported 00:08:15.282 00:08:15.282 Controller Memory Buffer Support 00:08:15.282 ================================ 00:08:15.282 Supported: No 00:08:15.282 00:08:15.282 Persistent Memory Region Support 00:08:15.282 ================================ 00:08:15.282 Supported: No 00:08:15.282 00:08:15.282 Admin Command Set Attributes 00:08:15.282 ============================ 00:08:15.282 Security Send/Receive: Not Supported 00:08:15.282 Format NVM: Supported 00:08:15.282 Firmware Activate/Download: Not Supported 00:08:15.282 Namespace Management: Supported 00:08:15.282 Device Self-Test: Not Supported 00:08:15.282 Directives: Supported 00:08:15.282 NVMe-MI: Not Supported 00:08:15.282 Virtualization Management: Not Supported 00:08:15.282 Doorbell Buffer Config: Supported 00:08:15.282 Get LBA Status Capability: Not Supported 00:08:15.282 Command & Feature Lockdown Capability: Not Supported 00:08:15.282 Abort Command Limit: 4 00:08:15.282 Async Event Request Limit: 4 00:08:15.282 Number of Firmware Slots: N/A 00:08:15.282 Firmware Slot 
1 Read-Only: N/A 00:08:15.282 Firmware Activation Without Reset: N/A 00:08:15.282 Multiple Update Detection Support: N/A 00:08:15.282 Firmware Update Granularity: No Information Provided 00:08:15.282 Per-Namespace SMART Log: Yes 00:08:15.282 Asymmetric Namespace Access Log Page: Not Supported 00:08:15.282 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:15.282 Command Effects Log Page: Supported 00:08:15.282 Get Log Page Extended Data: Supported 00:08:15.282 Telemetry Log Pages: Not Supported 00:08:15.282 Persistent Event Log Pages: Not Supported 00:08:15.282 Supported Log Pages Log Page: May Support 00:08:15.282 Commands Supported & Effects Log Page: Not Supported 00:08:15.282 Feature Identifiers & Effects Log Page:May Support 00:08:15.282 NVMe-MI Commands & Effects Log Page: May Support 00:08:15.282 Data Area 4 for Telemetry Log: Not Supported 00:08:15.282 Error Log Page Entries Supported: 1 00:08:15.282 Keep Alive: Not Supported 00:08:15.282 00:08:15.282 NVM Command Set Attributes 00:08:15.282 ========================== 00:08:15.282 Submission Queue Entry Size 00:08:15.282 Max: 64 00:08:15.282 Min: 64 00:08:15.282 Completion Queue Entry Size 00:08:15.282 Max: 16 00:08:15.282 Min: 16 00:08:15.282 Number of Namespaces: 256 00:08:15.282 Compare Command: Supported 00:08:15.282 Write Uncorrectable Command: Not Supported 00:08:15.282 Dataset Management Command: Supported 00:08:15.282 Write Zeroes Command: Supported 00:08:15.282 Set Features Save Field: Supported 00:08:15.282 Reservations: Not Supported 00:08:15.282 Timestamp: Supported 00:08:15.282 Copy: Supported 00:08:15.282 Volatile Write Cache: Present 00:08:15.282 Atomic Write Unit (Normal): 1 00:08:15.282 Atomic Write Unit (PFail): 1 00:08:15.282 Atomic Compare & Write Unit: 1 00:08:15.282 Fused Compare & Write: Not Supported 00:08:15.282 Scatter-Gather List 00:08:15.283 SGL Command Set: Supported 00:08:15.283 SGL Keyed: Not Supported 00:08:15.283 SGL Bit Bucket Descriptor: Not Supported 00:08:15.283 SGL Metadata Pointer: Not Supported 00:08:15.283 Oversized SGL: Not Supported 00:08:15.283 SGL Metadata Address: Not Supported 00:08:15.283 SGL Offset: Not Supported 00:08:15.283 Transport SGL Data Block: Not Supported 00:08:15.283 Replay Protected Memory Block: Not Supported 00:08:15.283 00:08:15.283 Firmware Slot Information 00:08:15.283 ========================= 00:08:15.283 Active slot: 1 00:08:15.283 Slot 1 Firmware Revision: 1.0 00:08:15.283 00:08:15.283 00:08:15.283 Commands Supported and Effects 00:08:15.283 ============================== 00:08:15.283 Admin Commands 00:08:15.283 -------------- 00:08:15.283 Delete I/O Submission Queue (00h): Supported 00:08:15.283 Create I/O Submission Queue (01h): Supported 00:08:15.283 Get Log Page (02h): Supported 00:08:15.283 Delete I/O Completion Queue (04h): Supported 00:08:15.283 Create I/O Completion Queue (05h): Supported 00:08:15.283 Identify (06h): Supported 00:08:15.283 Abort (08h): Supported 00:08:15.283 Set Features (09h): Supported 00:08:15.283 Get Features (0Ah): Supported 00:08:15.283 Asynchronous Event Request (0Ch): Supported 00:08:15.283 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:15.283 Directive Send (19h): Supported 00:08:15.283 Directive Receive (1Ah): Supported 00:08:15.283 Virtualization Management (1Ch): Supported 00:08:15.283 Doorbell Buffer Config (7Ch): Supported 00:08:15.283 Format NVM (80h): Supported LBA-Change 00:08:15.283 I/O Commands 00:08:15.283 ------------ 00:08:15.283 Flush (00h): Supported LBA-Change 00:08:15.283 Write (01h): Supported 
LBA-Change 00:08:15.283 Read (02h): Supported 00:08:15.283 Compare (05h): Supported 00:08:15.283 Write Zeroes (08h): Supported LBA-Change 00:08:15.283 Dataset Management (09h): Supported LBA-Change 00:08:15.283 Unknown (0Ch): Supported 00:08:15.283 Unknown (12h): Supported 00:08:15.283 Copy (19h): Supported LBA-Change 00:08:15.283 Unknown (1Dh): Supported LBA-Change 00:08:15.283 00:08:15.283 Error Log 00:08:15.283 ========= 00:08:15.283 00:08:15.283 Arbitration 00:08:15.283 =========== 00:08:15.283 Arbitration Burst: no limit 00:08:15.283 00:08:15.283 Power Management 00:08:15.283 ================ 00:08:15.283 Number of Power States: 1 00:08:15.283 Current Power State: Power State #0 00:08:15.283 Power State #0: 00:08:15.283 Max Power: 25.00 W 00:08:15.283 Non-Operational State: Operational 00:08:15.283 Entry Latency: 16 microseconds 00:08:15.283 Exit Latency: 4 microseconds 00:08:15.283 Relative Read Throughput: 0 00:08:15.283 Relative Read Latency: 0 00:08:15.283 Relative Write Throughput: 0 00:08:15.283 Relative Write Latency: 0 00:08:15.283 Idle Power: Not Reported 00:08:15.283 Active Power: Not Reported 00:08:15.283 Non-Operational Permissive Mode: Not Supported 00:08:15.283 00:08:15.283 Health Information 00:08:15.283 ================== 00:08:15.283 Critical Warnings: 00:08:15.283 Available Spare Space: OK 00:08:15.283 [2024-11-26 22:52:54.378134] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 76220 terminated unexpected 00:08:15.283 Temperature: OK 00:08:15.283 Device Reliability: OK 00:08:15.283 Read Only: No 00:08:15.283 Volatile Memory Backup: OK 00:08:15.283 Current Temperature: 323 Kelvin (50 Celsius) 00:08:15.283 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:15.283 Available Spare: 0% 00:08:15.283 Available Spare Threshold: 0% 00:08:15.283 Life Percentage Used: 0% 00:08:15.283 Data Units Read: 981 00:08:15.283 Data Units Written: 848 00:08:15.283 Host Read Commands: 53320 00:08:15.283 Host Write Commands: 52115 00:08:15.283 Controller Busy Time: 0 minutes 00:08:15.283 Power Cycles: 0 00:08:15.283 Power On Hours: 0 hours 00:08:15.283 Unsafe Shutdowns: 0 00:08:15.283 Unrecoverable Media Errors: 0 00:08:15.283 Lifetime Error Log Entries: 0 00:08:15.283 Warning Temperature Time: 0 minutes 00:08:15.283 Critical Temperature Time: 0 minutes 00:08:15.283 00:08:15.283 Number of Queues 00:08:15.283 ================ 00:08:15.283 Number of I/O Submission Queues: 64 00:08:15.283 Number of I/O Completion Queues: 64 00:08:15.283 00:08:15.283 ZNS Specific Controller Data 00:08:15.283 ============================ 00:08:15.283 Zone Append Size Limit: 0 00:08:15.283 00:08:15.283 00:08:15.283 Active Namespaces 00:08:15.283 ================= 00:08:15.283 Namespace ID:1 00:08:15.283 Error Recovery Timeout: Unlimited 00:08:15.283 Command Set Identifier: NVM (00h) 00:08:15.283 Deallocate: Supported 00:08:15.283 Deallocated/Unwritten Error: Supported 00:08:15.283 Deallocated Read Value: All 0x00 00:08:15.283 Deallocate in Write Zeroes: Not Supported 00:08:15.283 Deallocated Guard Field: 0xFFFF 00:08:15.283 Flush: Supported 00:08:15.283 Reservation: Not Supported 00:08:15.283 Namespace Sharing Capabilities: Private 00:08:15.283 Size (in LBAs): 1310720 (5GiB) 00:08:15.283 Capacity (in LBAs): 1310720 (5GiB) 00:08:15.283 Utilization (in LBAs): 1310720 (5GiB) 00:08:15.283 Thin Provisioning: Not Supported 00:08:15.283 Per-NS Atomic Units: No 00:08:15.283 Maximum Single Source Range Length: 128 00:08:15.283 Maximum Copy Length: 128 00:08:15.283 Maximum Source
Range Count: 128 00:08:15.283 NGUID/EUI64 Never Reused: No 00:08:15.283 Namespace Write Protected: No 00:08:15.283 Number of LBA Formats: 8 00:08:15.283 Current LBA Format: LBA Format #04 00:08:15.283 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:15.283 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:15.283 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:15.283 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:15.283 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:15.283 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:15.283 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:15.283 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:15.283 00:08:15.283 NVM Specific Namespace Data 00:08:15.283 =========================== 00:08:15.283 Logical Block Storage Tag Mask: 0 00:08:15.283 Protection Information Capabilities: 00:08:15.283 16b Guard Protection Information Storage Tag Support: No 00:08:15.283 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:15.283 Storage Tag Check Read Support: No 00:08:15.283 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.283 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.283 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.283 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.283 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.283 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.283 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.283 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.283 ===================================================== 00:08:15.283 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:15.283 ===================================================== 00:08:15.283 Controller Capabilities/Features 00:08:15.283 ================================ 00:08:15.283 Vendor ID: 1b36 00:08:15.283 Subsystem Vendor ID: 1af4 00:08:15.283 Serial Number: 12343 00:08:15.283 Model Number: QEMU NVMe Ctrl 00:08:15.283 Firmware Version: 8.0.0 00:08:15.283 Recommended Arb Burst: 6 00:08:15.283 IEEE OUI Identifier: 00 54 52 00:08:15.283 Multi-path I/O 00:08:15.283 May have multiple subsystem ports: No 00:08:15.283 May have multiple controllers: Yes 00:08:15.283 Associated with SR-IOV VF: No 00:08:15.283 Max Data Transfer Size: 524288 00:08:15.283 Max Number of Namespaces: 256 00:08:15.283 Max Number of I/O Queues: 64 00:08:15.283 NVMe Specification Version (VS): 1.4 00:08:15.283 NVMe Specification Version (Identify): 1.4 00:08:15.283 Maximum Queue Entries: 2048 00:08:15.283 Contiguous Queues Required: Yes 00:08:15.283 Arbitration Mechanisms Supported 00:08:15.283 Weighted Round Robin: Not Supported 00:08:15.283 Vendor Specific: Not Supported 00:08:15.283 Reset Timeout: 7500 ms 00:08:15.283 Doorbell Stride: 4 bytes 00:08:15.283 NVM Subsystem Reset: Not Supported 00:08:15.283 Command Sets Supported 00:08:15.283 NVM Command Set: Supported 00:08:15.283 Boot Partition: Not Supported 00:08:15.283 Memory Page Size Minimum: 4096 bytes 00:08:15.283 Memory Page Size Maximum: 65536 bytes 00:08:15.283 Persistent Memory Region: Not Supported 00:08:15.283 Optional Asynchronous Events Supported 00:08:15.283 
Namespace Attribute Notices: Supported 00:08:15.283 Firmware Activation Notices: Not Supported 00:08:15.283 ANA Change Notices: Not Supported 00:08:15.283 PLE Aggregate Log Change Notices: Not Supported 00:08:15.283 LBA Status Info Alert Notices: Not Supported 00:08:15.283 EGE Aggregate Log Change Notices: Not Supported 00:08:15.283 Normal NVM Subsystem Shutdown event: Not Supported 00:08:15.284 Zone Descriptor Change Notices: Not Supported 00:08:15.284 Discovery Log Change Notices: Not Supported 00:08:15.284 Controller Attributes 00:08:15.284 128-bit Host Identifier: Not Supported 00:08:15.284 Non-Operational Permissive Mode: Not Supported 00:08:15.284 NVM Sets: Not Supported 00:08:15.284 Read Recovery Levels: Not Supported 00:08:15.284 Endurance Groups: Supported 00:08:15.284 Predictable Latency Mode: Not Supported 00:08:15.284 Traffic Based Keep ALive: Not Supported 00:08:15.284 Namespace Granularity: Not Supported 00:08:15.284 SQ Associations: Not Supported 00:08:15.284 UUID List: Not Supported 00:08:15.284 Multi-Domain Subsystem: Not Supported 00:08:15.284 Fixed Capacity Management: Not Supported 00:08:15.284 Variable Capacity Management: Not Supported 00:08:15.284 Delete Endurance Group: Not Supported 00:08:15.284 Delete NVM Set: Not Supported 00:08:15.284 Extended LBA Formats Supported: Supported 00:08:15.284 Flexible Data Placement Supported: Supported 00:08:15.284 00:08:15.284 Controller Memory Buffer Support 00:08:15.284 ================================ 00:08:15.284 Supported: No 00:08:15.284 00:08:15.284 Persistent Memory Region Support 00:08:15.284 ================================ 00:08:15.284 Supported: No 00:08:15.284 00:08:15.284 Admin Command Set Attributes 00:08:15.284 ============================ 00:08:15.284 Security Send/Receive: Not Supported 00:08:15.284 Format NVM: Supported 00:08:15.284 Firmware Activate/Download: Not Supported 00:08:15.284 Namespace Management: Supported 00:08:15.284 Device Self-Test: Not Supported 00:08:15.284 Directives: Supported 00:08:15.284 NVMe-MI: Not Supported 00:08:15.284 Virtualization Management: Not Supported 00:08:15.284 Doorbell Buffer Config: Supported 00:08:15.284 Get LBA Status Capability: Not Supported 00:08:15.284 Command & Feature Lockdown Capability: Not Supported 00:08:15.284 Abort Command Limit: 4 00:08:15.284 Async Event Request Limit: 4 00:08:15.284 Number of Firmware Slots: N/A 00:08:15.284 Firmware Slot 1 Read-Only: N/A 00:08:15.284 Firmware Activation Without Reset: N/A 00:08:15.284 Multiple Update Detection Support: N/A 00:08:15.284 Firmware Update Granularity: No Information Provided 00:08:15.284 Per-Namespace SMART Log: Yes 00:08:15.284 Asymmetric Namespace Access Log Page: Not Supported 00:08:15.284 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:15.284 Command Effects Log Page: Supported 00:08:15.284 Get Log Page Extended Data: Supported 00:08:15.284 Telemetry Log Pages: Not Supported 00:08:15.284 Persistent Event Log Pages: Not Supported 00:08:15.284 Supported Log Pages Log Page: May Support 00:08:15.284 Commands Supported & Effects Log Page: Not Supported 00:08:15.284 Feature Identifiers & Effects Log Page:May Support 00:08:15.284 NVMe-MI Commands & Effects Log Page: May Support 00:08:15.284 Data Area 4 for Telemetry Log: Not Supported 00:08:15.284 Error Log Page Entries Supported: 1 00:08:15.284 Keep Alive: Not Supported 00:08:15.284 00:08:15.284 NVM Command Set Attributes 00:08:15.284 ========================== 00:08:15.284 Submission Queue Entry Size 00:08:15.284 Max: 64 00:08:15.284 Min: 64 00:08:15.284 
Completion Queue Entry Size 00:08:15.284 Max: 16 00:08:15.284 Min: 16 00:08:15.284 Number of Namespaces: 256 00:08:15.284 Compare Command: Supported 00:08:15.284 Write Uncorrectable Command: Not Supported 00:08:15.284 Dataset Management Command: Supported 00:08:15.284 Write Zeroes Command: Supported 00:08:15.284 Set Features Save Field: Supported 00:08:15.284 Reservations: Not Supported 00:08:15.284 Timestamp: Supported 00:08:15.284 Copy: Supported 00:08:15.284 Volatile Write Cache: Present 00:08:15.284 Atomic Write Unit (Normal): 1 00:08:15.284 Atomic Write Unit (PFail): 1 00:08:15.284 Atomic Compare & Write Unit: 1 00:08:15.284 Fused Compare & Write: Not Supported 00:08:15.284 Scatter-Gather List 00:08:15.284 SGL Command Set: Supported 00:08:15.284 SGL Keyed: Not Supported 00:08:15.284 SGL Bit Bucket Descriptor: Not Supported 00:08:15.284 SGL Metadata Pointer: Not Supported 00:08:15.284 Oversized SGL: Not Supported 00:08:15.284 SGL Metadata Address: Not Supported 00:08:15.284 SGL Offset: Not Supported 00:08:15.284 Transport SGL Data Block: Not Supported 00:08:15.284 Replay Protected Memory Block: Not Supported 00:08:15.284 00:08:15.284 Firmware Slot Information 00:08:15.284 ========================= 00:08:15.284 Active slot: 1 00:08:15.284 Slot 1 Firmware Revision: 1.0 00:08:15.284 00:08:15.284 00:08:15.284 Commands Supported and Effects 00:08:15.284 ============================== 00:08:15.284 Admin Commands 00:08:15.284 -------------- 00:08:15.284 Delete I/O Submission Queue (00h): Supported 00:08:15.284 Create I/O Submission Queue (01h): Supported 00:08:15.284 Get Log Page (02h): Supported 00:08:15.284 Delete I/O Completion Queue (04h): Supported 00:08:15.284 Create I/O Completion Queue (05h): Supported 00:08:15.284 Identify (06h): Supported 00:08:15.284 Abort (08h): Supported 00:08:15.284 Set Features (09h): Supported 00:08:15.284 Get Features (0Ah): Supported 00:08:15.284 Asynchronous Event Request (0Ch): Supported 00:08:15.284 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:15.284 Directive Send (19h): Supported 00:08:15.284 Directive Receive (1Ah): Supported 00:08:15.284 Virtualization Management (1Ch): Supported 00:08:15.284 Doorbell Buffer Config (7Ch): Supported 00:08:15.284 Format NVM (80h): Supported LBA-Change 00:08:15.284 I/O Commands 00:08:15.284 ------------ 00:08:15.284 Flush (00h): Supported LBA-Change 00:08:15.284 Write (01h): Supported LBA-Change 00:08:15.284 Read (02h): Supported 00:08:15.284 Compare (05h): Supported 00:08:15.284 Write Zeroes (08h): Supported LBA-Change 00:08:15.284 Dataset Management (09h): Supported LBA-Change 00:08:15.284 Unknown (0Ch): Supported 00:08:15.284 Unknown (12h): Supported 00:08:15.284 Copy (19h): Supported LBA-Change 00:08:15.284 Unknown (1Dh): Supported LBA-Change 00:08:15.284 00:08:15.284 Error Log 00:08:15.284 ========= 00:08:15.284 00:08:15.284 Arbitration 00:08:15.284 =========== 00:08:15.284 Arbitration Burst: no limit 00:08:15.284 00:08:15.284 Power Management 00:08:15.284 ================ 00:08:15.284 Number of Power States: 1 00:08:15.284 Current Power State: Power State #0 00:08:15.284 Power State #0: 00:08:15.284 Max Power: 25.00 W 00:08:15.284 Non-Operational State: Operational 00:08:15.284 Entry Latency: 16 microseconds 00:08:15.284 Exit Latency: 4 microseconds 00:08:15.284 Relative Read Throughput: 0 00:08:15.284 Relative Read Latency: 0 00:08:15.284 Relative Write Throughput: 0 00:08:15.284 Relative Write Latency: 0 00:08:15.284 Idle Power: Not Reported 00:08:15.284 Active Power: Not Reported 00:08:15.284 
Non-Operational Permissive Mode: Not Supported 00:08:15.284 00:08:15.284 Health Information 00:08:15.284 ================== 00:08:15.284 Critical Warnings: 00:08:15.284 Available Spare Space: OK 00:08:15.284 Temperature: OK 00:08:15.284 Device Reliability: OK 00:08:15.284 Read Only: No 00:08:15.284 Volatile Memory Backup: OK 00:08:15.284 Current Temperature: 323 Kelvin (50 Celsius) 00:08:15.284 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:15.284 Available Spare: 0% 00:08:15.284 Available Spare Threshold: 0% 00:08:15.284 Life Percentage Used: 0% 00:08:15.284 Data Units Read: 1100 00:08:15.284 Data Units Written: 1029 00:08:15.284 Host Read Commands: 39634 00:08:15.284 Host Write Commands: 39057 00:08:15.284 Controller Busy Time: 0 minutes 00:08:15.284 Power Cycles: 0 00:08:15.284 Power On Hours: 0 hours 00:08:15.284 Unsafe Shutdowns: 0 00:08:15.284 Unrecoverable Media Errors: 0 00:08:15.284 Lifetime Error Log Entries: 0 00:08:15.284 Warning Temperature Time: 0 minutes 00:08:15.284 Critical Temperature Time: 0 minutes 00:08:15.284 00:08:15.284 Number of Queues 00:08:15.284 ================ 00:08:15.284 Number of I/O Submission Queues: 64 00:08:15.284 Number of I/O Completion Queues: 64 00:08:15.284 00:08:15.284 ZNS Specific Controller Data 00:08:15.284 ============================ 00:08:15.284 Zone Append Size Limit: 0 00:08:15.284 00:08:15.284 00:08:15.284 Active Namespaces 00:08:15.284 ================= 00:08:15.284 Namespace ID:1 00:08:15.284 Error Recovery Timeout: Unlimited 00:08:15.284 Command Set Identifier: NVM (00h) 00:08:15.284 Deallocate: Supported 00:08:15.284 Deallocated/Unwritten Error: Supported 00:08:15.284 Deallocated Read Value: All 0x00 00:08:15.284 Deallocate in Write Zeroes: Not Supported 00:08:15.284 Deallocated Guard Field: 0xFFFF 00:08:15.284 Flush: Supported 00:08:15.284 Reservation: Not Supported 00:08:15.284 Namespace Sharing Capabilities: Multiple Controllers 00:08:15.285 Size (in LBAs): 262144 (1GiB) 00:08:15.285 Capacity (in LBAs): 262144 (1GiB) 00:08:15.285 Utilization (in LBAs): 262144 (1GiB) 00:08:15.285 Thin Provisioning: Not Supported 00:08:15.285 Per-NS Atomic Units: No 00:08:15.285 Maximum Single Source Range Length: 128 00:08:15.285 Maximum Copy Length: 128 00:08:15.285 Maximum Source Range Count: 128 00:08:15.285 NGUID/EUI64 Never Reused: No 00:08:15.285 Namespace Write Protected: No 00:08:15.285 Endurance group ID: 1 00:08:15.285 Number of LBA Formats: 8 00:08:15.285 Current LBA Format: LBA Format #04 00:08:15.285 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:15.285 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:15.285 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:15.285 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:15.285 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:15.285 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:15.285 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:15.285 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:15.285 00:08:15.285 Get Feature FDP: 00:08:15.285 ================ 00:08:15.285 Enabled: Yes 00:08:15.285 FDP configuration index: 0 00:08:15.285 00:08:15.285 FDP configurations log page 00:08:15.285 =========================== 00:08:15.285 Number of FDP configurations: 1 00:08:15.285 Version: 0 00:08:15.285 Size: 112 00:08:15.285 FDP Configuration Descriptor: 0 00:08:15.285 Descriptor Size: 96 00:08:15.285 Reclaim Group Identifier format: 2 00:08:15.285 FDP Volatile Write Cache: Not Present 00:08:15.285 FDP Configuration: Valid 00:08:15.285 Vendor 
Specific Size: 0 00:08:15.285 Number of Reclaim Groups: 2 00:08:15.285 Number of Reclaim Unit Handles: 8 00:08:15.285 Max Placement Identifiers: 128 00:08:15.285 Number of Namespaces Supported: 256 00:08:15.285 Reclaim Unit Nominal Size: 6000000 bytes 00:08:15.285 Estimated Reclaim Unit Time Limit: Not Reported 00:08:15.285 RUH Desc #000: RUH Type: Initially Isolated 00:08:15.285 RUH Desc #001: RUH Type: Initially Isolated 00:08:15.285 RUH Desc #002: RUH Type: Initially Isolated 00:08:15.285 RUH Desc #003: RUH Type: Initially Isolated 00:08:15.285 RUH Desc #004: RUH Type: Initially Isolated 00:08:15.285 RUH Desc #005: RUH Type: Initially Isolated 00:08:15.285 RUH Desc #006: RUH Type: Initially Isolated 00:08:15.285 RUH Desc #007: RUH Type: Initially Isolated 00:08:15.285 00:08:15.285 FDP reclaim unit handle usage log page 00:08:15.285 ====================================== 00:08:15.285 Number of Reclaim Unit Handles: 8 00:08:15.285 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:15.285 RUH Usage Desc #001: RUH Attributes: Unused 00:08:15.285 RUH Usage Desc #002: RUH Attributes: Unused 00:08:15.285 RUH Usage Desc #003: RUH Attributes: Unused 00:08:15.285 RUH Usage Desc #004: RUH Attributes: Unused 00:08:15.285 RUH Usage Desc #005: RUH Attributes: Unused 00:08:15.285 RUH Usage Desc #006: RUH Attributes: Unused 00:08:15.285 RUH Usage Desc #007: RUH Attributes: Unused 00:08:15.285 00:08:15.285 FDP statistics log page 00:08:15.285 ======================= 00:08:15.285 Host bytes with metadata written: 629673984 00:08:15.285 [2024-11-26 22:52:54.380782] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 76220 terminated unexpected 00:08:15.285 Media bytes with metadata written: 629829632 00:08:15.285 Media bytes erased: 0 00:08:15.285 00:08:15.285 FDP events log page 00:08:15.285 =================== 00:08:15.285 Number of FDP events: 0 00:08:15.285 00:08:15.285 NVM Specific Namespace Data 00:08:15.285 =========================== 00:08:15.285 Logical Block Storage Tag Mask: 0 00:08:15.285 Protection Information Capabilities: 00:08:15.285 16b Guard Protection Information Storage Tag Support: No 00:08:15.285 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:15.285 Storage Tag Check Read Support: No 00:08:15.285 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.285 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.285 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.285 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.285 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.285 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.285 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.285 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.285 ===================================================== 00:08:15.285 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:15.285 ===================================================== 00:08:15.285 Controller Capabilities/Features 00:08:15.285 ================================ 00:08:15.285 Vendor ID: 1b36 00:08:15.285 Subsystem Vendor ID: 1af4 00:08:15.285 Serial Number: 12342 00:08:15.285 Model
Number: QEMU NVMe Ctrl 00:08:15.285 Firmware Version: 8.0.0 00:08:15.285 Recommended Arb Burst: 6 00:08:15.285 IEEE OUI Identifier: 00 54 52 00:08:15.285 Multi-path I/O 00:08:15.285 May have multiple subsystem ports: No 00:08:15.285 May have multiple controllers: No 00:08:15.285 Associated with SR-IOV VF: No 00:08:15.285 Max Data Transfer Size: 524288 00:08:15.285 Max Number of Namespaces: 256 00:08:15.285 Max Number of I/O Queues: 64 00:08:15.285 NVMe Specification Version (VS): 1.4 00:08:15.285 NVMe Specification Version (Identify): 1.4 00:08:15.285 Maximum Queue Entries: 2048 00:08:15.285 Contiguous Queues Required: Yes 00:08:15.285 Arbitration Mechanisms Supported 00:08:15.285 Weighted Round Robin: Not Supported 00:08:15.285 Vendor Specific: Not Supported 00:08:15.285 Reset Timeout: 7500 ms 00:08:15.285 Doorbell Stride: 4 bytes 00:08:15.285 NVM Subsystem Reset: Not Supported 00:08:15.285 Command Sets Supported 00:08:15.285 NVM Command Set: Supported 00:08:15.285 Boot Partition: Not Supported 00:08:15.285 Memory Page Size Minimum: 4096 bytes 00:08:15.285 Memory Page Size Maximum: 65536 bytes 00:08:15.285 Persistent Memory Region: Not Supported 00:08:15.285 Optional Asynchronous Events Supported 00:08:15.285 Namespace Attribute Notices: Supported 00:08:15.285 Firmware Activation Notices: Not Supported 00:08:15.285 ANA Change Notices: Not Supported 00:08:15.285 PLE Aggregate Log Change Notices: Not Supported 00:08:15.285 LBA Status Info Alert Notices: Not Supported 00:08:15.285 EGE Aggregate Log Change Notices: Not Supported 00:08:15.285 Normal NVM Subsystem Shutdown event: Not Supported 00:08:15.285 Zone Descriptor Change Notices: Not Supported 00:08:15.285 Discovery Log Change Notices: Not Supported 00:08:15.285 Controller Attributes 00:08:15.285 128-bit Host Identifier: Not Supported 00:08:15.285 Non-Operational Permissive Mode: Not Supported 00:08:15.285 NVM Sets: Not Supported 00:08:15.285 Read Recovery Levels: Not Supported 00:08:15.285 Endurance Groups: Not Supported 00:08:15.285 Predictable Latency Mode: Not Supported 00:08:15.285 Traffic Based Keep ALive: Not Supported 00:08:15.285 Namespace Granularity: Not Supported 00:08:15.285 SQ Associations: Not Supported 00:08:15.285 UUID List: Not Supported 00:08:15.285 Multi-Domain Subsystem: Not Supported 00:08:15.285 Fixed Capacity Management: Not Supported 00:08:15.285 Variable Capacity Management: Not Supported 00:08:15.285 Delete Endurance Group: Not Supported 00:08:15.285 Delete NVM Set: Not Supported 00:08:15.285 Extended LBA Formats Supported: Supported 00:08:15.285 Flexible Data Placement Supported: Not Supported 00:08:15.285 00:08:15.285 Controller Memory Buffer Support 00:08:15.285 ================================ 00:08:15.285 Supported: No 00:08:15.285 00:08:15.286 Persistent Memory Region Support 00:08:15.286 ================================ 00:08:15.286 Supported: No 00:08:15.286 00:08:15.286 Admin Command Set Attributes 00:08:15.286 ============================ 00:08:15.286 Security Send/Receive: Not Supported 00:08:15.286 Format NVM: Supported 00:08:15.286 Firmware Activate/Download: Not Supported 00:08:15.286 Namespace Management: Supported 00:08:15.286 Device Self-Test: Not Supported 00:08:15.286 Directives: Supported 00:08:15.286 NVMe-MI: Not Supported 00:08:15.286 Virtualization Management: Not Supported 00:08:15.286 Doorbell Buffer Config: Supported 00:08:15.286 Get LBA Status Capability: Not Supported 00:08:15.286 Command & Feature Lockdown Capability: Not Supported 00:08:15.286 Abort Command Limit: 4 00:08:15.286 
Async Event Request Limit: 4 00:08:15.286 Number of Firmware Slots: N/A 00:08:15.286 Firmware Slot 1 Read-Only: N/A 00:08:15.286 Firmware Activation Without Reset: N/A 00:08:15.286 Multiple Update Detection Support: N/A 00:08:15.286 Firmware Update Granularity: No Information Provided 00:08:15.286 Per-Namespace SMART Log: Yes 00:08:15.286 Asymmetric Namespace Access Log Page: Not Supported 00:08:15.286 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:15.286 Command Effects Log Page: Supported 00:08:15.286 Get Log Page Extended Data: Supported 00:08:15.286 Telemetry Log Pages: Not Supported 00:08:15.286 Persistent Event Log Pages: Not Supported 00:08:15.286 Supported Log Pages Log Page: May Support 00:08:15.286 Commands Supported & Effects Log Page: Not Supported 00:08:15.286 Feature Identifiers & Effects Log Page:May Support 00:08:15.286 NVMe-MI Commands & Effects Log Page: May Support 00:08:15.286 Data Area 4 for Telemetry Log: Not Supported 00:08:15.286 Error Log Page Entries Supported: 1 00:08:15.286 Keep Alive: Not Supported 00:08:15.286 00:08:15.286 NVM Command Set Attributes 00:08:15.286 ========================== 00:08:15.286 Submission Queue Entry Size 00:08:15.286 Max: 64 00:08:15.286 Min: 64 00:08:15.286 Completion Queue Entry Size 00:08:15.286 Max: 16 00:08:15.286 Min: 16 00:08:15.286 Number of Namespaces: 256 00:08:15.286 Compare Command: Supported 00:08:15.286 Write Uncorrectable Command: Not Supported 00:08:15.286 Dataset Management Command: Supported 00:08:15.286 Write Zeroes Command: Supported 00:08:15.286 Set Features Save Field: Supported 00:08:15.286 Reservations: Not Supported 00:08:15.286 Timestamp: Supported 00:08:15.286 Copy: Supported 00:08:15.286 Volatile Write Cache: Present 00:08:15.286 Atomic Write Unit (Normal): 1 00:08:15.286 Atomic Write Unit (PFail): 1 00:08:15.286 Atomic Compare & Write Unit: 1 00:08:15.286 Fused Compare & Write: Not Supported 00:08:15.286 Scatter-Gather List 00:08:15.286 SGL Command Set: Supported 00:08:15.286 SGL Keyed: Not Supported 00:08:15.286 SGL Bit Bucket Descriptor: Not Supported 00:08:15.286 SGL Metadata Pointer: Not Supported 00:08:15.286 Oversized SGL: Not Supported 00:08:15.286 SGL Metadata Address: Not Supported 00:08:15.286 SGL Offset: Not Supported 00:08:15.286 Transport SGL Data Block: Not Supported 00:08:15.286 Replay Protected Memory Block: Not Supported 00:08:15.286 00:08:15.286 Firmware Slot Information 00:08:15.286 ========================= 00:08:15.286 Active slot: 1 00:08:15.286 Slot 1 Firmware Revision: 1.0 00:08:15.286 00:08:15.286 00:08:15.286 Commands Supported and Effects 00:08:15.286 ============================== 00:08:15.286 Admin Commands 00:08:15.286 -------------- 00:08:15.286 Delete I/O Submission Queue (00h): Supported 00:08:15.286 Create I/O Submission Queue (01h): Supported 00:08:15.286 Get Log Page (02h): Supported 00:08:15.286 Delete I/O Completion Queue (04h): Supported 00:08:15.286 Create I/O Completion Queue (05h): Supported 00:08:15.286 Identify (06h): Supported 00:08:15.286 Abort (08h): Supported 00:08:15.286 Set Features (09h): Supported 00:08:15.286 Get Features (0Ah): Supported 00:08:15.286 Asynchronous Event Request (0Ch): Supported 00:08:15.286 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:15.286 Directive Send (19h): Supported 00:08:15.286 Directive Receive (1Ah): Supported 00:08:15.286 Virtualization Management (1Ch): Supported 00:08:15.286 Doorbell Buffer Config (7Ch): Supported 00:08:15.286 Format NVM (80h): Supported LBA-Change 00:08:15.286 I/O Commands 00:08:15.286 
------------ 00:08:15.286 Flush (00h): Supported LBA-Change 00:08:15.286 Write (01h): Supported LBA-Change 00:08:15.286 Read (02h): Supported 00:08:15.286 Compare (05h): Supported 00:08:15.286 Write Zeroes (08h): Supported LBA-Change 00:08:15.286 Dataset Management (09h): Supported LBA-Change 00:08:15.286 Unknown (0Ch): Supported 00:08:15.286 Unknown (12h): Supported 00:08:15.286 Copy (19h): Supported LBA-Change 00:08:15.286 Unknown (1Dh): Supported LBA-Change 00:08:15.286 00:08:15.286 Error Log 00:08:15.286 ========= 00:08:15.286 00:08:15.286 Arbitration 00:08:15.286 =========== 00:08:15.286 Arbitration Burst: no limit 00:08:15.286 00:08:15.286 Power Management 00:08:15.286 ================ 00:08:15.286 Number of Power States: 1 00:08:15.286 Current Power State: Power State #0 00:08:15.286 Power State #0: 00:08:15.286 Max Power: 25.00 W 00:08:15.286 Non-Operational State: Operational 00:08:15.286 Entry Latency: 16 microseconds 00:08:15.286 Exit Latency: 4 microseconds 00:08:15.286 Relative Read Throughput: 0 00:08:15.286 Relative Read Latency: 0 00:08:15.286 Relative Write Throughput: 0 00:08:15.286 Relative Write Latency: 0 00:08:15.286 Idle Power: Not Reported 00:08:15.286 Active Power: Not Reported 00:08:15.286 Non-Operational Permissive Mode: Not Supported 00:08:15.286 00:08:15.286 Health Information 00:08:15.286 ================== 00:08:15.286 Critical Warnings: 00:08:15.286 Available Spare Space: OK 00:08:15.286 Temperature: OK 00:08:15.286 Device Reliability: OK 00:08:15.286 Read Only: No 00:08:15.286 Volatile Memory Backup: OK 00:08:15.286 Current Temperature: 323 Kelvin (50 Celsius) 00:08:15.286 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:15.286 Available Spare: 0% 00:08:15.286 Available Spare Threshold: 0% 00:08:15.286 Life Percentage Used: 0% 00:08:15.286 Data Units Read: 2224 00:08:15.286 Data Units Written: 2011 00:08:15.286 Host Read Commands: 110192 00:08:15.286 Host Write Commands: 108461 00:08:15.286 Controller Busy Time: 0 minutes 00:08:15.286 Power Cycles: 0 00:08:15.286 Power On Hours: 0 hours 00:08:15.286 Unsafe Shutdowns: 0 00:08:15.286 Unrecoverable Media Errors: 0 00:08:15.286 Lifetime Error Log Entries: 0 00:08:15.286 Warning Temperature Time: 0 minutes 00:08:15.286 Critical Temperature Time: 0 minutes 00:08:15.286 00:08:15.286 Number of Queues 00:08:15.286 ================ 00:08:15.286 Number of I/O Submission Queues: 64 00:08:15.286 Number of I/O Completion Queues: 64 00:08:15.286 00:08:15.286 ZNS Specific Controller Data 00:08:15.286 ============================ 00:08:15.286 Zone Append Size Limit: 0 00:08:15.286 00:08:15.286 00:08:15.286 Active Namespaces 00:08:15.286 ================= 00:08:15.286 Namespace ID:1 00:08:15.286 Error Recovery Timeout: Unlimited 00:08:15.286 Command Set Identifier: NVM (00h) 00:08:15.286 Deallocate: Supported 00:08:15.286 Deallocated/Unwritten Error: Supported 00:08:15.286 Deallocated Read Value: All 0x00 00:08:15.286 Deallocate in Write Zeroes: Not Supported 00:08:15.286 Deallocated Guard Field: 0xFFFF 00:08:15.286 Flush: Supported 00:08:15.286 Reservation: Not Supported 00:08:15.286 Namespace Sharing Capabilities: Private 00:08:15.286 Size (in LBAs): 1048576 (4GiB) 00:08:15.286 Capacity (in LBAs): 1048576 (4GiB) 00:08:15.286 Utilization (in LBAs): 1048576 (4GiB) 00:08:15.286 Thin Provisioning: Not Supported 00:08:15.286 Per-NS Atomic Units: No 00:08:15.286 Maximum Single Source Range Length: 128 00:08:15.286 Maximum Copy Length: 128 00:08:15.286 Maximum Source Range Count: 128 00:08:15.286 NGUID/EUI64 Never Reused: No 
00:08:15.286 Namespace Write Protected: No 00:08:15.286 Number of LBA Formats: 8 00:08:15.286 Current LBA Format: LBA Format #04 00:08:15.286 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:15.286 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:15.286 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:15.286 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:15.286 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:15.286 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:15.286 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:15.286 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:15.286 00:08:15.286 NVM Specific Namespace Data 00:08:15.286 =========================== 00:08:15.286 Logical Block Storage Tag Mask: 0 00:08:15.286 Protection Information Capabilities: 00:08:15.286 16b Guard Protection Information Storage Tag Support: No 00:08:15.286 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:15.287 Storage Tag Check Read Support: No 00:08:15.287 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.287 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.287 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.287 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.287 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.287 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.287 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.287 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.287 Namespace ID:2 00:08:15.287 Error Recovery Timeout: Unlimited 00:08:15.287 Command Set Identifier: NVM (00h) 00:08:15.287 Deallocate: Supported 00:08:15.287 Deallocated/Unwritten Error: Supported 00:08:15.287 Deallocated Read Value: All 0x00 00:08:15.287 Deallocate in Write Zeroes: Not Supported 00:08:15.287 Deallocated Guard Field: 0xFFFF 00:08:15.287 Flush: Supported 00:08:15.287 Reservation: Not Supported 00:08:15.287 Namespace Sharing Capabilities: Private 00:08:15.287 Size (in LBAs): 1048576 (4GiB) 00:08:15.287 Capacity (in LBAs): 1048576 (4GiB) 00:08:15.287 Utilization (in LBAs): 1048576 (4GiB) 00:08:15.287 Thin Provisioning: Not Supported 00:08:15.287 Per-NS Atomic Units: No 00:08:15.287 Maximum Single Source Range Length: 128 00:08:15.287 Maximum Copy Length: 128 00:08:15.287 Maximum Source Range Count: 128 00:08:15.287 NGUID/EUI64 Never Reused: No 00:08:15.287 Namespace Write Protected: No 00:08:15.287 Number of LBA Formats: 8 00:08:15.287 Current LBA Format: LBA Format #04 00:08:15.287 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:15.287 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:15.287 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:15.287 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:15.287 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:15.287 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:15.287 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:15.287 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:15.287 00:08:15.287 NVM Specific Namespace Data 00:08:15.287 =========================== 00:08:15.287 Logical Block Storage Tag Mask: 0 00:08:15.287 Protection Information 
Capabilities: 00:08:15.287 16b Guard Protection Information Storage Tag Support: No 00:08:15.287 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:15.287 Storage Tag Check Read Support: No 00:08:15.287 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.287 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.287 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.287 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.287 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.287 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.287 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.287 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.287 Namespace ID:3 00:08:15.287 Error Recovery Timeout: Unlimited 00:08:15.287 Command Set Identifier: NVM (00h) 00:08:15.287 Deallocate: Supported 00:08:15.287 Deallocated/Unwritten Error: Supported 00:08:15.287 Deallocated Read Value: All 0x00 00:08:15.287 Deallocate in Write Zeroes: Not Supported 00:08:15.287 Deallocated Guard Field: 0xFFFF 00:08:15.287 Flush: Supported 00:08:15.287 Reservation: Not Supported 00:08:15.287 Namespace Sharing Capabilities: Private 00:08:15.287 Size (in LBAs): 1048576 (4GiB) 00:08:15.595 Capacity (in LBAs): 1048576 (4GiB) 00:08:15.595 Utilization (in LBAs): 1048576 (4GiB) 00:08:15.595 Thin Provisioning: Not Supported 00:08:15.595 Per-NS Atomic Units: No 00:08:15.595 Maximum Single Source Range Length: 128 00:08:15.595 Maximum Copy Length: 128 00:08:15.595 Maximum Source Range Count: 128 00:08:15.595 NGUID/EUI64 Never Reused: No 00:08:15.595 Namespace Write Protected: No 00:08:15.595 Number of LBA Formats: 8 00:08:15.595 Current LBA Format: LBA Format #04 00:08:15.595 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:15.595 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:15.595 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:15.595 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:15.595 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:15.595 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:15.595 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:15.595 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:15.595 00:08:15.595 NVM Specific Namespace Data 00:08:15.595 =========================== 00:08:15.595 Logical Block Storage Tag Mask: 0 00:08:15.595 Protection Information Capabilities: 00:08:15.595 16b Guard Protection Information Storage Tag Support: No 00:08:15.595 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:15.595 Storage Tag Check Read Support: No 00:08:15.595 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.596 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.596 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.596 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.596 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.596 Extended LBA Format #05: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:08:15.596 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.596 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.596 22:52:54 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:15.596 22:52:54 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:15.596 ===================================================== 00:08:15.596 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:15.596 ===================================================== 00:08:15.596 Controller Capabilities/Features 00:08:15.596 ================================ 00:08:15.596 Vendor ID: 1b36 00:08:15.596 Subsystem Vendor ID: 1af4 00:08:15.596 Serial Number: 12340 00:08:15.596 Model Number: QEMU NVMe Ctrl 00:08:15.596 Firmware Version: 8.0.0 00:08:15.596 Recommended Arb Burst: 6 00:08:15.596 IEEE OUI Identifier: 00 54 52 00:08:15.596 Multi-path I/O 00:08:15.596 May have multiple subsystem ports: No 00:08:15.596 May have multiple controllers: No 00:08:15.596 Associated with SR-IOV VF: No 00:08:15.596 Max Data Transfer Size: 524288 00:08:15.596 Max Number of Namespaces: 256 00:08:15.596 Max Number of I/O Queues: 64 00:08:15.596 NVMe Specification Version (VS): 1.4 00:08:15.596 NVMe Specification Version (Identify): 1.4 00:08:15.596 Maximum Queue Entries: 2048 00:08:15.596 Contiguous Queues Required: Yes 00:08:15.596 Arbitration Mechanisms Supported 00:08:15.596 Weighted Round Robin: Not Supported 00:08:15.596 Vendor Specific: Not Supported 00:08:15.596 Reset Timeout: 7500 ms 00:08:15.596 Doorbell Stride: 4 bytes 00:08:15.596 NVM Subsystem Reset: Not Supported 00:08:15.596 Command Sets Supported 00:08:15.596 NVM Command Set: Supported 00:08:15.596 Boot Partition: Not Supported 00:08:15.596 Memory Page Size Minimum: 4096 bytes 00:08:15.596 Memory Page Size Maximum: 65536 bytes 00:08:15.596 Persistent Memory Region: Not Supported 00:08:15.596 Optional Asynchronous Events Supported 00:08:15.596 Namespace Attribute Notices: Supported 00:08:15.596 Firmware Activation Notices: Not Supported 00:08:15.596 ANA Change Notices: Not Supported 00:08:15.596 PLE Aggregate Log Change Notices: Not Supported 00:08:15.596 LBA Status Info Alert Notices: Not Supported 00:08:15.596 EGE Aggregate Log Change Notices: Not Supported 00:08:15.596 Normal NVM Subsystem Shutdown event: Not Supported 00:08:15.596 Zone Descriptor Change Notices: Not Supported 00:08:15.596 Discovery Log Change Notices: Not Supported 00:08:15.596 Controller Attributes 00:08:15.596 128-bit Host Identifier: Not Supported 00:08:15.596 Non-Operational Permissive Mode: Not Supported 00:08:15.596 NVM Sets: Not Supported 00:08:15.596 Read Recovery Levels: Not Supported 00:08:15.596 Endurance Groups: Not Supported 00:08:15.596 Predictable Latency Mode: Not Supported 00:08:15.596 Traffic Based Keep ALive: Not Supported 00:08:15.596 Namespace Granularity: Not Supported 00:08:15.596 SQ Associations: Not Supported 00:08:15.596 UUID List: Not Supported 00:08:15.596 Multi-Domain Subsystem: Not Supported 00:08:15.596 Fixed Capacity Management: Not Supported 00:08:15.596 Variable Capacity Management: Not Supported 00:08:15.596 Delete Endurance Group: Not Supported 00:08:15.596 Delete NVM Set: Not Supported 00:08:15.596 Extended LBA Formats Supported: Supported 00:08:15.596 Flexible Data Placement Supported: Not Supported 00:08:15.596 00:08:15.596 
Controller Memory Buffer Support 00:08:15.596 ================================ 00:08:15.596 Supported: No 00:08:15.596 00:08:15.596 Persistent Memory Region Support 00:08:15.596 ================================ 00:08:15.596 Supported: No 00:08:15.596 00:08:15.596 Admin Command Set Attributes 00:08:15.596 ============================ 00:08:15.596 Security Send/Receive: Not Supported 00:08:15.596 Format NVM: Supported 00:08:15.596 Firmware Activate/Download: Not Supported 00:08:15.596 Namespace Management: Supported 00:08:15.596 Device Self-Test: Not Supported 00:08:15.596 Directives: Supported 00:08:15.596 NVMe-MI: Not Supported 00:08:15.596 Virtualization Management: Not Supported 00:08:15.596 Doorbell Buffer Config: Supported 00:08:15.596 Get LBA Status Capability: Not Supported 00:08:15.596 Command & Feature Lockdown Capability: Not Supported 00:08:15.596 Abort Command Limit: 4 00:08:15.596 Async Event Request Limit: 4 00:08:15.596 Number of Firmware Slots: N/A 00:08:15.596 Firmware Slot 1 Read-Only: N/A 00:08:15.596 Firmware Activation Without Reset: N/A 00:08:15.596 Multiple Update Detection Support: N/A 00:08:15.596 Firmware Update Granularity: No Information Provided 00:08:15.596 Per-Namespace SMART Log: Yes 00:08:15.596 Asymmetric Namespace Access Log Page: Not Supported 00:08:15.596 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:15.596 Command Effects Log Page: Supported 00:08:15.596 Get Log Page Extended Data: Supported 00:08:15.596 Telemetry Log Pages: Not Supported 00:08:15.596 Persistent Event Log Pages: Not Supported 00:08:15.596 Supported Log Pages Log Page: May Support 00:08:15.596 Commands Supported & Effects Log Page: Not Supported 00:08:15.596 Feature Identifiers & Effects Log Page:May Support 00:08:15.596 NVMe-MI Commands & Effects Log Page: May Support 00:08:15.596 Data Area 4 for Telemetry Log: Not Supported 00:08:15.596 Error Log Page Entries Supported: 1 00:08:15.596 Keep Alive: Not Supported 00:08:15.596 00:08:15.596 NVM Command Set Attributes 00:08:15.596 ========================== 00:08:15.596 Submission Queue Entry Size 00:08:15.597 Max: 64 00:08:15.597 Min: 64 00:08:15.597 Completion Queue Entry Size 00:08:15.597 Max: 16 00:08:15.597 Min: 16 00:08:15.597 Number of Namespaces: 256 00:08:15.597 Compare Command: Supported 00:08:15.597 Write Uncorrectable Command: Not Supported 00:08:15.597 Dataset Management Command: Supported 00:08:15.597 Write Zeroes Command: Supported 00:08:15.597 Set Features Save Field: Supported 00:08:15.597 Reservations: Not Supported 00:08:15.597 Timestamp: Supported 00:08:15.597 Copy: Supported 00:08:15.597 Volatile Write Cache: Present 00:08:15.597 Atomic Write Unit (Normal): 1 00:08:15.597 Atomic Write Unit (PFail): 1 00:08:15.597 Atomic Compare & Write Unit: 1 00:08:15.597 Fused Compare & Write: Not Supported 00:08:15.597 Scatter-Gather List 00:08:15.597 SGL Command Set: Supported 00:08:15.597 SGL Keyed: Not Supported 00:08:15.597 SGL Bit Bucket Descriptor: Not Supported 00:08:15.597 SGL Metadata Pointer: Not Supported 00:08:15.597 Oversized SGL: Not Supported 00:08:15.597 SGL Metadata Address: Not Supported 00:08:15.597 SGL Offset: Not Supported 00:08:15.597 Transport SGL Data Block: Not Supported 00:08:15.597 Replay Protected Memory Block: Not Supported 00:08:15.597 00:08:15.597 Firmware Slot Information 00:08:15.597 ========================= 00:08:15.597 Active slot: 1 00:08:15.597 Slot 1 Firmware Revision: 1.0 00:08:15.597 00:08:15.597 00:08:15.597 Commands Supported and Effects 00:08:15.597 ============================== 
00:08:15.597 Admin Commands 00:08:15.597 -------------- 00:08:15.597 Delete I/O Submission Queue (00h): Supported 00:08:15.597 Create I/O Submission Queue (01h): Supported 00:08:15.597 Get Log Page (02h): Supported 00:08:15.597 Delete I/O Completion Queue (04h): Supported 00:08:15.597 Create I/O Completion Queue (05h): Supported 00:08:15.597 Identify (06h): Supported 00:08:15.597 Abort (08h): Supported 00:08:15.597 Set Features (09h): Supported 00:08:15.597 Get Features (0Ah): Supported 00:08:15.597 Asynchronous Event Request (0Ch): Supported 00:08:15.597 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:15.597 Directive Send (19h): Supported 00:08:15.597 Directive Receive (1Ah): Supported 00:08:15.597 Virtualization Management (1Ch): Supported 00:08:15.597 Doorbell Buffer Config (7Ch): Supported 00:08:15.597 Format NVM (80h): Supported LBA-Change 00:08:15.597 I/O Commands 00:08:15.597 ------------ 00:08:15.597 Flush (00h): Supported LBA-Change 00:08:15.597 Write (01h): Supported LBA-Change 00:08:15.597 Read (02h): Supported 00:08:15.597 Compare (05h): Supported 00:08:15.597 Write Zeroes (08h): Supported LBA-Change 00:08:15.597 Dataset Management (09h): Supported LBA-Change 00:08:15.597 Unknown (0Ch): Supported 00:08:15.597 Unknown (12h): Supported 00:08:15.597 Copy (19h): Supported LBA-Change 00:08:15.597 Unknown (1Dh): Supported LBA-Change 00:08:15.597 00:08:15.597 Error Log 00:08:15.597 ========= 00:08:15.597 00:08:15.597 Arbitration 00:08:15.597 =========== 00:08:15.597 Arbitration Burst: no limit 00:08:15.597 00:08:15.597 Power Management 00:08:15.597 ================ 00:08:15.597 Number of Power States: 1 00:08:15.597 Current Power State: Power State #0 00:08:15.597 Power State #0: 00:08:15.597 Max Power: 25.00 W 00:08:15.597 Non-Operational State: Operational 00:08:15.597 Entry Latency: 16 microseconds 00:08:15.597 Exit Latency: 4 microseconds 00:08:15.597 Relative Read Throughput: 0 00:08:15.597 Relative Read Latency: 0 00:08:15.597 Relative Write Throughput: 0 00:08:15.597 Relative Write Latency: 0 00:08:15.597 Idle Power: Not Reported 00:08:15.597 Active Power: Not Reported 00:08:15.597 Non-Operational Permissive Mode: Not Supported 00:08:15.597 00:08:15.597 Health Information 00:08:15.597 ================== 00:08:15.597 Critical Warnings: 00:08:15.597 Available Spare Space: OK 00:08:15.597 Temperature: OK 00:08:15.597 Device Reliability: OK 00:08:15.597 Read Only: No 00:08:15.597 Volatile Memory Backup: OK 00:08:15.597 Current Temperature: 323 Kelvin (50 Celsius) 00:08:15.597 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:15.597 Available Spare: 0% 00:08:15.597 Available Spare Threshold: 0% 00:08:15.597 Life Percentage Used: 0% 00:08:15.597 Data Units Read: 619 00:08:15.597 Data Units Written: 547 00:08:15.597 Host Read Commands: 35574 00:08:15.597 Host Write Commands: 35360 00:08:15.597 Controller Busy Time: 0 minutes 00:08:15.597 Power Cycles: 0 00:08:15.597 Power On Hours: 0 hours 00:08:15.597 Unsafe Shutdowns: 0 00:08:15.597 Unrecoverable Media Errors: 0 00:08:15.597 Lifetime Error Log Entries: 0 00:08:15.597 Warning Temperature Time: 0 minutes 00:08:15.597 Critical Temperature Time: 0 minutes 00:08:15.597 00:08:15.597 Number of Queues 00:08:15.597 ================ 00:08:15.597 Number of I/O Submission Queues: 64 00:08:15.597 Number of I/O Completion Queues: 64 00:08:15.597 00:08:15.597 ZNS Specific Controller Data 00:08:15.597 ============================ 00:08:15.597 Zone Append Size Limit: 0 00:08:15.597 00:08:15.597 00:08:15.597 Active Namespaces 
00:08:15.597 ================= 00:08:15.597 Namespace ID:1 00:08:15.597 Error Recovery Timeout: Unlimited 00:08:15.597 Command Set Identifier: NVM (00h) 00:08:15.597 Deallocate: Supported 00:08:15.597 Deallocated/Unwritten Error: Supported 00:08:15.597 Deallocated Read Value: All 0x00 00:08:15.598 Deallocate in Write Zeroes: Not Supported 00:08:15.598 Deallocated Guard Field: 0xFFFF 00:08:15.598 Flush: Supported 00:08:15.598 Reservation: Not Supported 00:08:15.598 Metadata Transferred as: Separate Metadata Buffer 00:08:15.598 Namespace Sharing Capabilities: Private 00:08:15.598 Size (in LBAs): 1548666 (5GiB) 00:08:15.598 Capacity (in LBAs): 1548666 (5GiB) 00:08:15.598 Utilization (in LBAs): 1548666 (5GiB) 00:08:15.598 Thin Provisioning: Not Supported 00:08:15.598 Per-NS Atomic Units: No 00:08:15.598 Maximum Single Source Range Length: 128 00:08:15.598 Maximum Copy Length: 128 00:08:15.598 Maximum Source Range Count: 128 00:08:15.598 NGUID/EUI64 Never Reused: No 00:08:15.598 Namespace Write Protected: No 00:08:15.598 Number of LBA Formats: 8 00:08:15.598 Current LBA Format: LBA Format #07 00:08:15.598 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:15.598 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:15.598 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:15.598 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:15.598 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:15.598 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:15.598 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:15.598 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:15.598 00:08:15.598 NVM Specific Namespace Data 00:08:15.598 =========================== 00:08:15.598 Logical Block Storage Tag Mask: 0 00:08:15.598 Protection Information Capabilities: 00:08:15.598 16b Guard Protection Information Storage Tag Support: No 00:08:15.598 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:15.598 Storage Tag Check Read Support: No 00:08:15.598 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.598 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.598 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.598 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.598 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.598 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.598 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.598 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.598 22:52:54 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:15.598 22:52:54 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:15.886 ===================================================== 00:08:15.886 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:15.886 ===================================================== 00:08:15.886 Controller Capabilities/Features 00:08:15.886 ================================ 00:08:15.886 Vendor ID: 1b36 00:08:15.886 Subsystem Vendor ID: 1af4 00:08:15.886 Serial Number: 12341 00:08:15.886 Model Number: QEMU NVMe Ctrl 00:08:15.886 Firmware Version: 8.0.0 
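The xtrace lines above (nvme/nvme.sh lines 15-16) show how these dumps are produced: the script loops over the controllers' PCI addresses and runs spdk_nvme_identify against each one in turn; the dump for 0000:00:11.0 that opens above continues below. A minimal reconstruction of that loop, where the contents of the bdfs array are an assumption based on the four controllers appearing in this log:

# Sketch of the per-controller identify loop traced above; the bdfs
# values are assumed from the controllers seen in this log.
bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
for bdf in "${bdfs[@]}"; do
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r "trtype:PCIe traddr:${bdf}" -i 0
done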
00:08:15.886 Recommended Arb Burst: 6 00:08:15.886 IEEE OUI Identifier: 00 54 52 00:08:15.886 Multi-path I/O 00:08:15.886 May have multiple subsystem ports: No 00:08:15.886 May have multiple controllers: No 00:08:15.886 Associated with SR-IOV VF: No 00:08:15.886 Max Data Transfer Size: 524288 00:08:15.886 Max Number of Namespaces: 256 00:08:15.886 Max Number of I/O Queues: 64 00:08:15.886 NVMe Specification Version (VS): 1.4 00:08:15.886 NVMe Specification Version (Identify): 1.4 00:08:15.886 Maximum Queue Entries: 2048 00:08:15.886 Contiguous Queues Required: Yes 00:08:15.886 Arbitration Mechanisms Supported 00:08:15.886 Weighted Round Robin: Not Supported 00:08:15.886 Vendor Specific: Not Supported 00:08:15.886 Reset Timeout: 7500 ms 00:08:15.886 Doorbell Stride: 4 bytes 00:08:15.886 NVM Subsystem Reset: Not Supported 00:08:15.886 Command Sets Supported 00:08:15.886 NVM Command Set: Supported 00:08:15.886 Boot Partition: Not Supported 00:08:15.886 Memory Page Size Minimum: 4096 bytes 00:08:15.886 Memory Page Size Maximum: 65536 bytes 00:08:15.886 Persistent Memory Region: Not Supported 00:08:15.886 Optional Asynchronous Events Supported 00:08:15.886 Namespace Attribute Notices: Supported 00:08:15.886 Firmware Activation Notices: Not Supported 00:08:15.886 ANA Change Notices: Not Supported 00:08:15.887 PLE Aggregate Log Change Notices: Not Supported 00:08:15.887 LBA Status Info Alert Notices: Not Supported 00:08:15.887 EGE Aggregate Log Change Notices: Not Supported 00:08:15.887 Normal NVM Subsystem Shutdown event: Not Supported 00:08:15.887 Zone Descriptor Change Notices: Not Supported 00:08:15.887 Discovery Log Change Notices: Not Supported 00:08:15.887 Controller Attributes 00:08:15.887 128-bit Host Identifier: Not Supported 00:08:15.887 Non-Operational Permissive Mode: Not Supported 00:08:15.887 NVM Sets: Not Supported 00:08:15.887 Read Recovery Levels: Not Supported 00:08:15.887 Endurance Groups: Not Supported 00:08:15.887 Predictable Latency Mode: Not Supported 00:08:15.887 Traffic Based Keep Alive: Not Supported 00:08:15.887 Namespace Granularity: Not Supported 00:08:15.887 SQ Associations: Not Supported 00:08:15.887 UUID List: Not Supported 00:08:15.887 Multi-Domain Subsystem: Not Supported 00:08:15.887 Fixed Capacity Management: Not Supported 00:08:15.887 Variable Capacity Management: Not Supported 00:08:15.887 Delete Endurance Group: Not Supported 00:08:15.887 Delete NVM Set: Not Supported 00:08:15.887 Extended LBA Formats Supported: Supported 00:08:15.887 Flexible Data Placement Supported: Not Supported 00:08:15.887 00:08:15.887 Controller Memory Buffer Support 00:08:15.887 ================================ 00:08:15.887 Supported: No 00:08:15.887 00:08:15.887 Persistent Memory Region Support 00:08:15.887 ================================ 00:08:15.887 Supported: No 00:08:15.887 00:08:15.887 Admin Command Set Attributes 00:08:15.887 ============================ 00:08:15.887 Security Send/Receive: Not Supported 00:08:15.887 Format NVM: Supported 00:08:15.887 Firmware Activate/Download: Not Supported 00:08:15.887 Namespace Management: Supported 00:08:15.887 Device Self-Test: Not Supported 00:08:15.887 Directives: Supported 00:08:15.887 NVMe-MI: Not Supported 00:08:15.887 Virtualization Management: Not Supported 00:08:15.887 Doorbell Buffer Config: Supported 00:08:15.887 Get LBA Status Capability: Not Supported 00:08:15.887 Command & Feature Lockdown Capability: Not Supported 00:08:15.887 Abort Command Limit: 4 00:08:15.887 Async Event Request Limit: 4 00:08:15.887 Number of Firmware
Slots: N/A 00:08:15.887 Firmware Slot 1 Read-Only: N/A 00:08:15.887 Firmware Activation Without Reset: N/A 00:08:15.887 Multiple Update Detection Support: N/A 00:08:15.887 Firmware Update Granularity: No Information Provided 00:08:15.887 Per-Namespace SMART Log: Yes 00:08:15.887 Asymmetric Namespace Access Log Page: Not Supported 00:08:15.887 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:15.887 Command Effects Log Page: Supported 00:08:15.887 Get Log Page Extended Data: Supported 00:08:15.887 Telemetry Log Pages: Not Supported 00:08:15.887 Persistent Event Log Pages: Not Supported 00:08:15.887 Supported Log Pages Log Page: May Support 00:08:15.887 Commands Supported & Effects Log Page: Not Supported 00:08:15.887 Feature Identifiers & Effects Log Page: May Support 00:08:15.887 NVMe-MI Commands & Effects Log Page: May Support 00:08:15.887 Data Area 4 for Telemetry Log: Not Supported 00:08:15.887 Error Log Page Entries Supported: 1 00:08:15.887 Keep Alive: Not Supported 00:08:15.887 00:08:15.887 NVM Command Set Attributes 00:08:15.887 ========================== 00:08:15.887 Submission Queue Entry Size 00:08:15.887 Max: 64 00:08:15.887 Min: 64 00:08:15.887 Completion Queue Entry Size 00:08:15.887 Max: 16 00:08:15.887 Min: 16 00:08:15.887 Number of Namespaces: 256 00:08:15.887 Compare Command: Supported 00:08:15.887 Write Uncorrectable Command: Not Supported 00:08:15.887 Dataset Management Command: Supported 00:08:15.887 Write Zeroes Command: Supported 00:08:15.887 Set Features Save Field: Supported 00:08:15.887 Reservations: Not Supported 00:08:15.887 Timestamp: Supported 00:08:15.887 Copy: Supported 00:08:15.887 Volatile Write Cache: Present 00:08:15.887 Atomic Write Unit (Normal): 1 00:08:15.887 Atomic Write Unit (PFail): 1 00:08:15.887 Atomic Compare & Write Unit: 1 00:08:15.887 Fused Compare & Write: Not Supported 00:08:15.887 Scatter-Gather List 00:08:15.887 SGL Command Set: Supported 00:08:15.887 SGL Keyed: Not Supported 00:08:15.887 SGL Bit Bucket Descriptor: Not Supported 00:08:15.887 SGL Metadata Pointer: Not Supported 00:08:15.887 Oversized SGL: Not Supported 00:08:15.887 SGL Metadata Address: Not Supported 00:08:15.887 SGL Offset: Not Supported 00:08:15.887 Transport SGL Data Block: Not Supported 00:08:15.887 Replay Protected Memory Block: Not Supported 00:08:15.887 00:08:15.887 Firmware Slot Information 00:08:15.887 ========================= 00:08:15.887 Active slot: 1 00:08:15.887 Slot 1 Firmware Revision: 1.0 00:08:15.887 00:08:15.887 00:08:15.887 Commands Supported and Effects 00:08:15.887 ============================== 00:08:15.887 Admin Commands 00:08:15.887 -------------- 00:08:15.887 Delete I/O Submission Queue (00h): Supported 00:08:15.887 Create I/O Submission Queue (01h): Supported 00:08:15.887 Get Log Page (02h): Supported 00:08:15.887 Delete I/O Completion Queue (04h): Supported 00:08:15.887 Create I/O Completion Queue (05h): Supported 00:08:15.887 Identify (06h): Supported 00:08:15.887 Abort (08h): Supported 00:08:15.887 Set Features (09h): Supported 00:08:15.887 Get Features (0Ah): Supported 00:08:15.887 Asynchronous Event Request (0Ch): Supported 00:08:15.887 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:15.887 Directive Send (19h): Supported 00:08:15.887 Directive Receive (1Ah): Supported 00:08:15.887 Virtualization Management (1Ch): Supported 00:08:15.887 Doorbell Buffer Config (7Ch): Supported 00:08:15.887 Format NVM (80h): Supported LBA-Change 00:08:15.887 I/O Commands 00:08:15.887 ------------ 00:08:15.887 Flush (00h): Supported LBA-Change
00:08:15.887 Write (01h): Supported LBA-Change 00:08:15.887 Read (02h): Supported 00:08:15.887 Compare (05h): Supported 00:08:15.887 Write Zeroes (08h): Supported LBA-Change 00:08:15.887 Dataset Management (09h): Supported LBA-Change 00:08:15.887 Unknown (0Ch): Supported 00:08:15.887 Unknown (12h): Supported 00:08:15.887 Copy (19h): Supported LBA-Change 00:08:15.887 Unknown (1Dh): Supported LBA-Change 00:08:15.887 00:08:15.887 Error Log 00:08:15.887 ========= 00:08:15.887 00:08:15.887 Arbitration 00:08:15.887 =========== 00:08:15.887 Arbitration Burst: no limit 00:08:15.887 00:08:15.887 Power Management 00:08:15.887 ================ 00:08:15.887 Number of Power States: 1 00:08:15.887 Current Power State: Power State #0 00:08:15.887 Power State #0: 00:08:15.887 Max Power: 25.00 W 00:08:15.887 Non-Operational State: Operational 00:08:15.887 Entry Latency: 16 microseconds 00:08:15.887 Exit Latency: 4 microseconds 00:08:15.887 Relative Read Throughput: 0 00:08:15.887 Relative Read Latency: 0 00:08:15.887 Relative Write Throughput: 0 00:08:15.887 Relative Write Latency: 0 00:08:15.887 Idle Power: Not Reported 00:08:15.887 Active Power: Not Reported 00:08:15.887 Non-Operational Permissive Mode: Not Supported 00:08:15.887 00:08:15.887 Health Information 00:08:15.887 ================== 00:08:15.887 Critical Warnings: 00:08:15.887 Available Spare Space: OK 00:08:15.887 Temperature: OK 00:08:15.887 Device Reliability: OK 00:08:15.887 Read Only: No 00:08:15.887 Volatile Memory Backup: OK 00:08:15.887 Current Temperature: 323 Kelvin (50 Celsius) 00:08:15.887 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:15.887 Available Spare: 0% 00:08:15.887 Available Spare Threshold: 0% 00:08:15.887 Life Percentage Used: 0% 00:08:15.887 Data Units Read: 981 00:08:15.887 Data Units Written: 848 00:08:15.887 Host Read Commands: 53320 00:08:15.887 Host Write Commands: 52115 00:08:15.887 Controller Busy Time: 0 minutes 00:08:15.887 Power Cycles: 0 00:08:15.887 Power On Hours: 0 hours 00:08:15.887 Unsafe Shutdowns: 0 00:08:15.887 Unrecoverable Media Errors: 0 00:08:15.887 Lifetime Error Log Entries: 0 00:08:15.887 Warning Temperature Time: 0 minutes 00:08:15.887 Critical Temperature Time: 0 minutes 00:08:15.887 00:08:15.887 Number of Queues 00:08:15.887 ================ 00:08:15.887 Number of I/O Submission Queues: 64 00:08:15.887 Number of I/O Completion Queues: 64 00:08:15.887 00:08:15.887 ZNS Specific Controller Data 00:08:15.887 ============================ 00:08:15.887 Zone Append Size Limit: 0 00:08:15.888 00:08:15.888 00:08:15.888 Active Namespaces 00:08:15.888 ================= 00:08:15.888 Namespace ID:1 00:08:15.888 Error Recovery Timeout: Unlimited 00:08:15.888 Command Set Identifier: NVM (00h) 00:08:15.888 Deallocate: Supported 00:08:15.888 Deallocated/Unwritten Error: Supported 00:08:15.888 Deallocated Read Value: All 0x00 00:08:15.888 Deallocate in Write Zeroes: Not Supported 00:08:15.888 Deallocated Guard Field: 0xFFFF 00:08:15.888 Flush: Supported 00:08:15.888 Reservation: Not Supported 00:08:15.888 Namespace Sharing Capabilities: Private 00:08:15.888 Size (in LBAs): 1310720 (5GiB) 00:08:15.888 Capacity (in LBAs): 1310720 (5GiB) 00:08:15.888 Utilization (in LBAs): 1310720 (5GiB) 00:08:15.888 Thin Provisioning: Not Supported 00:08:15.888 Per-NS Atomic Units: No 00:08:15.888 Maximum Single Source Range Length: 128 00:08:15.888 Maximum Copy Length: 128 00:08:15.888 Maximum Source Range Count: 128 00:08:15.888 NGUID/EUI64 Never Reused: No 00:08:15.888 Namespace Write Protected: No 00:08:15.888 Number 
of LBA Formats: 8 00:08:15.888 Current LBA Format: LBA Format #04 00:08:15.888 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:15.888 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:15.888 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:15.888 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:15.888 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:15.888 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:15.888 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:15.888 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:15.888 00:08:15.888 NVM Specific Namespace Data 00:08:15.888 =========================== 00:08:15.888 Logical Block Storage Tag Mask: 0 00:08:15.888 Protection Information Capabilities: 00:08:15.888 16b Guard Protection Information Storage Tag Support: No 00:08:15.888 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:15.888 Storage Tag Check Read Support: No 00:08:15.888 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.888 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.888 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.888 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.888 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.888 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.888 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.888 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.888 22:52:54 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:15.888 22:52:54 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:16.150 ===================================================== 00:08:16.150 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:16.150 ===================================================== 00:08:16.150 Controller Capabilities/Features 00:08:16.150 ================================ 00:08:16.150 Vendor ID: 1b36 00:08:16.150 Subsystem Vendor ID: 1af4 00:08:16.150 Serial Number: 12342 00:08:16.150 Model Number: QEMU NVMe Ctrl 00:08:16.150 Firmware Version: 8.0.0 00:08:16.150 Recommended Arb Burst: 6 00:08:16.150 IEEE OUI Identifier: 00 54 52 00:08:16.150 Multi-path I/O 00:08:16.150 May have multiple subsystem ports: No 00:08:16.150 May have multiple controllers: No 00:08:16.150 Associated with SR-IOV VF: No 00:08:16.150 Max Data Transfer Size: 524288 00:08:16.150 Max Number of Namespaces: 256 00:08:16.150 Max Number of I/O Queues: 64 00:08:16.150 NVMe Specification Version (VS): 1.4 00:08:16.150 NVMe Specification Version (Identify): 1.4 00:08:16.150 Maximum Queue Entries: 2048 00:08:16.150 Contiguous Queues Required: Yes 00:08:16.150 Arbitration Mechanisms Supported 00:08:16.150 Weighted Round Robin: Not Supported 00:08:16.150 Vendor Specific: Not Supported 00:08:16.150 Reset Timeout: 7500 ms 00:08:16.150 Doorbell Stride: 4 bytes 00:08:16.150 NVM Subsystem Reset: Not Supported 00:08:16.150 Command Sets Supported 00:08:16.150 NVM Command Set: Supported 00:08:16.150 Boot Partition: Not Supported 00:08:16.150 Memory Page Size Minimum: 4096 bytes 00:08:16.150 Memory Page Size Maximum: 
65536 bytes 00:08:16.150 Persistent Memory Region: Not Supported 00:08:16.150 Optional Asynchronous Events Supported 00:08:16.150 Namespace Attribute Notices: Supported 00:08:16.150 Firmware Activation Notices: Not Supported 00:08:16.150 ANA Change Notices: Not Supported 00:08:16.150 PLE Aggregate Log Change Notices: Not Supported 00:08:16.150 LBA Status Info Alert Notices: Not Supported 00:08:16.150 EGE Aggregate Log Change Notices: Not Supported 00:08:16.150 Normal NVM Subsystem Shutdown event: Not Supported 00:08:16.150 Zone Descriptor Change Notices: Not Supported 00:08:16.150 Discovery Log Change Notices: Not Supported 00:08:16.150 Controller Attributes 00:08:16.150 128-bit Host Identifier: Not Supported 00:08:16.150 Non-Operational Permissive Mode: Not Supported 00:08:16.150 NVM Sets: Not Supported 00:08:16.150 Read Recovery Levels: Not Supported 00:08:16.150 Endurance Groups: Not Supported 00:08:16.150 Predictable Latency Mode: Not Supported 00:08:16.150 Traffic Based Keep Alive: Not Supported 00:08:16.150 Namespace Granularity: Not Supported 00:08:16.150 SQ Associations: Not Supported 00:08:16.150 UUID List: Not Supported 00:08:16.150 Multi-Domain Subsystem: Not Supported 00:08:16.150 Fixed Capacity Management: Not Supported 00:08:16.150 Variable Capacity Management: Not Supported 00:08:16.150 Delete Endurance Group: Not Supported 00:08:16.150 Delete NVM Set: Not Supported 00:08:16.150 Extended LBA Formats Supported: Supported 00:08:16.150 Flexible Data Placement Supported: Not Supported 00:08:16.150 00:08:16.150 Controller Memory Buffer Support 00:08:16.150 ================================ 00:08:16.150 Supported: No 00:08:16.150 00:08:16.150 Persistent Memory Region Support 00:08:16.150 ================================ 00:08:16.150 Supported: No 00:08:16.150 00:08:16.150 Admin Command Set Attributes 00:08:16.150 ============================ 00:08:16.150 Security Send/Receive: Not Supported 00:08:16.150 Format NVM: Supported 00:08:16.150 Firmware Activate/Download: Not Supported 00:08:16.150 Namespace Management: Supported 00:08:16.150 Device Self-Test: Not Supported 00:08:16.150 Directives: Supported 00:08:16.150 NVMe-MI: Not Supported 00:08:16.150 Virtualization Management: Not Supported 00:08:16.150 Doorbell Buffer Config: Supported 00:08:16.150 Get LBA Status Capability: Not Supported 00:08:16.150 Command & Feature Lockdown Capability: Not Supported 00:08:16.150 Abort Command Limit: 4 00:08:16.150 Async Event Request Limit: 4 00:08:16.150 Number of Firmware Slots: N/A 00:08:16.150 Firmware Slot 1 Read-Only: N/A 00:08:16.150 Firmware Activation Without Reset: N/A 00:08:16.150 Multiple Update Detection Support: N/A 00:08:16.150 Firmware Update Granularity: No Information Provided 00:08:16.150 Per-Namespace SMART Log: Yes 00:08:16.150 Asymmetric Namespace Access Log Page: Not Supported 00:08:16.150 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:16.150 Command Effects Log Page: Supported 00:08:16.150 Get Log Page Extended Data: Supported 00:08:16.150 Telemetry Log Pages: Not Supported 00:08:16.150 Persistent Event Log Pages: Not Supported 00:08:16.150 Supported Log Pages Log Page: May Support 00:08:16.150 Commands Supported & Effects Log Page: Not Supported 00:08:16.150 Feature Identifiers & Effects Log Page: May Support 00:08:16.150 NVMe-MI Commands & Effects Log Page: May Support 00:08:16.150 Data Area 4 for Telemetry Log: Not Supported 00:08:16.150 Error Log Page Entries Supported: 1 00:08:16.150 Keep Alive: Not Supported 00:08:16.150 00:08:16.150 NVM Command Set Attributes
00:08:16.150 ========================== 00:08:16.150 Submission Queue Entry Size 00:08:16.150 Max: 64 00:08:16.150 Min: 64 00:08:16.150 Completion Queue Entry Size 00:08:16.150 Max: 16 00:08:16.150 Min: 16 00:08:16.150 Number of Namespaces: 256 00:08:16.150 Compare Command: Supported 00:08:16.150 Write Uncorrectable Command: Not Supported 00:08:16.150 Dataset Management Command: Supported 00:08:16.150 Write Zeroes Command: Supported 00:08:16.150 Set Features Save Field: Supported 00:08:16.150 Reservations: Not Supported 00:08:16.150 Timestamp: Supported 00:08:16.150 Copy: Supported 00:08:16.150 Volatile Write Cache: Present 00:08:16.150 Atomic Write Unit (Normal): 1 00:08:16.150 Atomic Write Unit (PFail): 1 00:08:16.150 Atomic Compare & Write Unit: 1 00:08:16.150 Fused Compare & Write: Not Supported 00:08:16.150 Scatter-Gather List 00:08:16.150 SGL Command Set: Supported 00:08:16.150 SGL Keyed: Not Supported 00:08:16.150 SGL Bit Bucket Descriptor: Not Supported 00:08:16.150 SGL Metadata Pointer: Not Supported 00:08:16.150 Oversized SGL: Not Supported 00:08:16.150 SGL Metadata Address: Not Supported 00:08:16.150 SGL Offset: Not Supported 00:08:16.150 Transport SGL Data Block: Not Supported 00:08:16.150 Replay Protected Memory Block: Not Supported 00:08:16.150 00:08:16.150 Firmware Slot Information 00:08:16.150 ========================= 00:08:16.150 Active slot: 1 00:08:16.150 Slot 1 Firmware Revision: 1.0 00:08:16.150 00:08:16.150 00:08:16.150 Commands Supported and Effects 00:08:16.150 ============================== 00:08:16.150 Admin Commands 00:08:16.150 -------------- 00:08:16.150 Delete I/O Submission Queue (00h): Supported 00:08:16.150 Create I/O Submission Queue (01h): Supported 00:08:16.150 Get Log Page (02h): Supported 00:08:16.150 Delete I/O Completion Queue (04h): Supported 00:08:16.150 Create I/O Completion Queue (05h): Supported 00:08:16.150 Identify (06h): Supported 00:08:16.150 Abort (08h): Supported 00:08:16.150 Set Features (09h): Supported 00:08:16.150 Get Features (0Ah): Supported 00:08:16.150 Asynchronous Event Request (0Ch): Supported 00:08:16.150 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:16.150 Directive Send (19h): Supported 00:08:16.151 Directive Receive (1Ah): Supported 00:08:16.151 Virtualization Management (1Ch): Supported 00:08:16.151 Doorbell Buffer Config (7Ch): Supported 00:08:16.151 Format NVM (80h): Supported LBA-Change 00:08:16.151 I/O Commands 00:08:16.151 ------------ 00:08:16.151 Flush (00h): Supported LBA-Change 00:08:16.151 Write (01h): Supported LBA-Change 00:08:16.151 Read (02h): Supported 00:08:16.151 Compare (05h): Supported 00:08:16.151 Write Zeroes (08h): Supported LBA-Change 00:08:16.151 Dataset Management (09h): Supported LBA-Change 00:08:16.151 Unknown (0Ch): Supported 00:08:16.151 Unknown (12h): Supported 00:08:16.151 Copy (19h): Supported LBA-Change 00:08:16.151 Unknown (1Dh): Supported LBA-Change 00:08:16.151 00:08:16.151 Error Log 00:08:16.151 ========= 00:08:16.151 00:08:16.151 Arbitration 00:08:16.151 =========== 00:08:16.151 Arbitration Burst: no limit 00:08:16.151 00:08:16.151 Power Management 00:08:16.151 ================ 00:08:16.151 Number of Power States: 1 00:08:16.151 Current Power State: Power State #0 00:08:16.151 Power State #0: 00:08:16.151 Max Power: 25.00 W 00:08:16.151 Non-Operational State: Operational 00:08:16.151 Entry Latency: 16 microseconds 00:08:16.151 Exit Latency: 4 microseconds 00:08:16.151 Relative Read Throughput: 0 00:08:16.151 Relative Read Latency: 0 00:08:16.151 Relative Write 
Throughput: 0 00:08:16.151 Relative Write Latency: 0 00:08:16.151 Idle Power: Not Reported 00:08:16.151 Active Power: Not Reported 00:08:16.151 Non-Operational Permissive Mode: Not Supported 00:08:16.151 00:08:16.151 Health Information 00:08:16.151 ================== 00:08:16.151 Critical Warnings: 00:08:16.151 Available Spare Space: OK 00:08:16.151 Temperature: OK 00:08:16.151 Device Reliability: OK 00:08:16.151 Read Only: No 00:08:16.151 Volatile Memory Backup: OK 00:08:16.151 Current Temperature: 323 Kelvin (50 Celsius) 00:08:16.151 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:16.151 Available Spare: 0% 00:08:16.151 Available Spare Threshold: 0% 00:08:16.151 Life Percentage Used: 0% 00:08:16.151 Data Units Read: 2224 00:08:16.151 Data Units Written: 2011 00:08:16.151 Host Read Commands: 110192 00:08:16.151 Host Write Commands: 108461 00:08:16.151 Controller Busy Time: 0 minutes 00:08:16.151 Power Cycles: 0 00:08:16.151 Power On Hours: 0 hours 00:08:16.151 Unsafe Shutdowns: 0 00:08:16.151 Unrecoverable Media Errors: 0 00:08:16.151 Lifetime Error Log Entries: 0 00:08:16.151 Warning Temperature Time: 0 minutes 00:08:16.151 Critical Temperature Time: 0 minutes 00:08:16.151 00:08:16.151 Number of Queues 00:08:16.151 ================ 00:08:16.151 Number of I/O Submission Queues: 64 00:08:16.151 Number of I/O Completion Queues: 64 00:08:16.151 00:08:16.151 ZNS Specific Controller Data 00:08:16.151 ============================ 00:08:16.151 Zone Append Size Limit: 0 00:08:16.151 00:08:16.151 00:08:16.151 Active Namespaces 00:08:16.151 ================= 00:08:16.151 Namespace ID:1 00:08:16.151 Error Recovery Timeout: Unlimited 00:08:16.151 Command Set Identifier: NVM (00h) 00:08:16.151 Deallocate: Supported 00:08:16.151 Deallocated/Unwritten Error: Supported 00:08:16.151 Deallocated Read Value: All 0x00 00:08:16.151 Deallocate in Write Zeroes: Not Supported 00:08:16.151 Deallocated Guard Field: 0xFFFF 00:08:16.151 Flush: Supported 00:08:16.151 Reservation: Not Supported 00:08:16.151 Namespace Sharing Capabilities: Private 00:08:16.151 Size (in LBAs): 1048576 (4GiB) 00:08:16.151 Capacity (in LBAs): 1048576 (4GiB) 00:08:16.151 Utilization (in LBAs): 1048576 (4GiB) 00:08:16.151 Thin Provisioning: Not Supported 00:08:16.151 Per-NS Atomic Units: No 00:08:16.151 Maximum Single Source Range Length: 128 00:08:16.151 Maximum Copy Length: 128 00:08:16.151 Maximum Source Range Count: 128 00:08:16.151 NGUID/EUI64 Never Reused: No 00:08:16.151 Namespace Write Protected: No 00:08:16.151 Number of LBA Formats: 8 00:08:16.151 Current LBA Format: LBA Format #04 00:08:16.151 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:16.151 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:16.151 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:16.151 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:16.151 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:16.151 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:16.151 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:16.151 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:16.151 00:08:16.151 NVM Specific Namespace Data 00:08:16.151 =========================== 00:08:16.151 Logical Block Storage Tag Mask: 0 00:08:16.151 Protection Information Capabilities: 00:08:16.151 16b Guard Protection Information Storage Tag Support: No 00:08:16.151 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:16.151 Storage Tag Check Read Support: No 00:08:16.151 Extended LBA Format #00: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:08:16.151 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.151 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.151 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.151 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.151 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.151 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.151 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.151 Namespace ID:2 00:08:16.151 Error Recovery Timeout: Unlimited 00:08:16.151 Command Set Identifier: NVM (00h) 00:08:16.151 Deallocate: Supported 00:08:16.151 Deallocated/Unwritten Error: Supported 00:08:16.151 Deallocated Read Value: All 0x00 00:08:16.151 Deallocate in Write Zeroes: Not Supported 00:08:16.151 Deallocated Guard Field: 0xFFFF 00:08:16.151 Flush: Supported 00:08:16.151 Reservation: Not Supported 00:08:16.151 Namespace Sharing Capabilities: Private 00:08:16.151 Size (in LBAs): 1048576 (4GiB) 00:08:16.151 Capacity (in LBAs): 1048576 (4GiB) 00:08:16.151 Utilization (in LBAs): 1048576 (4GiB) 00:08:16.151 Thin Provisioning: Not Supported 00:08:16.151 Per-NS Atomic Units: No 00:08:16.151 Maximum Single Source Range Length: 128 00:08:16.151 Maximum Copy Length: 128 00:08:16.151 Maximum Source Range Count: 128 00:08:16.151 NGUID/EUI64 Never Reused: No 00:08:16.151 Namespace Write Protected: No 00:08:16.151 Number of LBA Formats: 8 00:08:16.151 Current LBA Format: LBA Format #04 00:08:16.151 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:16.151 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:16.151 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:16.151 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:16.151 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:16.151 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:16.151 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:16.151 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:16.151 00:08:16.151 NVM Specific Namespace Data 00:08:16.151 =========================== 00:08:16.151 Logical Block Storage Tag Mask: 0 00:08:16.151 Protection Information Capabilities: 00:08:16.151 16b Guard Protection Information Storage Tag Support: No 00:08:16.151 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:16.151 Storage Tag Check Read Support: No 00:08:16.151 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.151 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.151 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.151 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.151 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.151 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.151 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.151 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.151 Namespace ID:3 00:08:16.151 
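Each namespace on 0000:00:12.0 reports Size, Capacity, and Utilization of 1048576 LBAs alongside a 4GiB figure (the dump for Namespace ID:3 continues below). With the current LBA Format #04 (4096-byte data blocks, no interleaved metadata) the two are consistent, as a quick shell check confirms; the same arithmetic gives 5GiB for the 1310720-LBA namespace on 0000:00:11.0 earlier:

# Cross-check the reported namespace sizes: LBA count x data size = capacity.
block=4096
echo "$(( 1048576 * block / 1024**3 )) GiB"   # 0000:00:12.0 namespaces -> 4 GiB
echo "$(( 1310720 * block / 1024**3 )) GiB"   # 0000:00:11.0 namespace  -> 5 GiB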
Error Recovery Timeout: Unlimited 00:08:16.151 Command Set Identifier: NVM (00h) 00:08:16.151 Deallocate: Supported 00:08:16.151 Deallocated/Unwritten Error: Supported 00:08:16.151 Deallocated Read Value: All 0x00 00:08:16.151 Deallocate in Write Zeroes: Not Supported 00:08:16.151 Deallocated Guard Field: 0xFFFF 00:08:16.151 Flush: Supported 00:08:16.151 Reservation: Not Supported 00:08:16.151 Namespace Sharing Capabilities: Private 00:08:16.151 Size (in LBAs): 1048576 (4GiB) 00:08:16.151 Capacity (in LBAs): 1048576 (4GiB) 00:08:16.151 Utilization (in LBAs): 1048576 (4GiB) 00:08:16.151 Thin Provisioning: Not Supported 00:08:16.151 Per-NS Atomic Units: No 00:08:16.151 Maximum Single Source Range Length: 128 00:08:16.151 Maximum Copy Length: 128 00:08:16.152 Maximum Source Range Count: 128 00:08:16.152 NGUID/EUI64 Never Reused: No 00:08:16.152 Namespace Write Protected: No 00:08:16.152 Number of LBA Formats: 8 00:08:16.152 Current LBA Format: LBA Format #04 00:08:16.152 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:16.152 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:16.152 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:16.152 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:16.152 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:16.152 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:16.152 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:16.152 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:16.152 00:08:16.152 NVM Specific Namespace Data 00:08:16.152 =========================== 00:08:16.152 Logical Block Storage Tag Mask: 0 00:08:16.152 Protection Information Capabilities: 00:08:16.152 16b Guard Protection Information Storage Tag Support: No 00:08:16.152 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:16.152 Storage Tag Check Read Support: No 00:08:16.152 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.152 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.152 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.152 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.152 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.152 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.152 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.152 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.152 22:52:55 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:16.152 22:52:55 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:16.413 ===================================================== 00:08:16.413 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:16.413 ===================================================== 00:08:16.413 Controller Capabilities/Features 00:08:16.413 ================================ 00:08:16.413 Vendor ID: 1b36 00:08:16.413 Subsystem Vendor ID: 1af4 00:08:16.413 Serial Number: 12343 00:08:16.413 Model Number: QEMU NVMe Ctrl 00:08:16.413 Firmware Version: 8.0.0 00:08:16.413 Recommended Arb Burst: 6 00:08:16.413 IEEE OUI Identifier: 00 54 52 00:08:16.413 Multi-path I/O 00:08:16.413 May have 
multiple subsystem ports: No 00:08:16.413 May have multiple controllers: Yes 00:08:16.413 Associated with SR-IOV VF: No 00:08:16.413 Max Data Transfer Size: 524288 00:08:16.413 Max Number of Namespaces: 256 00:08:16.413 Max Number of I/O Queues: 64 00:08:16.413 NVMe Specification Version (VS): 1.4 00:08:16.413 NVMe Specification Version (Identify): 1.4 00:08:16.413 Maximum Queue Entries: 2048 00:08:16.413 Contiguous Queues Required: Yes 00:08:16.413 Arbitration Mechanisms Supported 00:08:16.413 Weighted Round Robin: Not Supported 00:08:16.413 Vendor Specific: Not Supported 00:08:16.413 Reset Timeout: 7500 ms 00:08:16.413 Doorbell Stride: 4 bytes 00:08:16.413 NVM Subsystem Reset: Not Supported 00:08:16.413 Command Sets Supported 00:08:16.413 NVM Command Set: Supported 00:08:16.413 Boot Partition: Not Supported 00:08:16.413 Memory Page Size Minimum: 4096 bytes 00:08:16.413 Memory Page Size Maximum: 65536 bytes 00:08:16.413 Persistent Memory Region: Not Supported 00:08:16.413 Optional Asynchronous Events Supported 00:08:16.413 Namespace Attribute Notices: Supported 00:08:16.413 Firmware Activation Notices: Not Supported 00:08:16.413 ANA Change Notices: Not Supported 00:08:16.413 PLE Aggregate Log Change Notices: Not Supported 00:08:16.413 LBA Status Info Alert Notices: Not Supported 00:08:16.413 EGE Aggregate Log Change Notices: Not Supported 00:08:16.413 Normal NVM Subsystem Shutdown event: Not Supported 00:08:16.413 Zone Descriptor Change Notices: Not Supported 00:08:16.413 Discovery Log Change Notices: Not Supported 00:08:16.413 Controller Attributes 00:08:16.413 128-bit Host Identifier: Not Supported 00:08:16.413 Non-Operational Permissive Mode: Not Supported 00:08:16.414 NVM Sets: Not Supported 00:08:16.414 Read Recovery Levels: Not Supported 00:08:16.414 Endurance Groups: Supported 00:08:16.414 Predictable Latency Mode: Not Supported 00:08:16.414 Traffic Based Keep Alive: Not Supported 00:08:16.414 Namespace Granularity: Not Supported 00:08:16.414 SQ Associations: Not Supported 00:08:16.414 UUID List: Not Supported 00:08:16.414 Multi-Domain Subsystem: Not Supported 00:08:16.414 Fixed Capacity Management: Not Supported 00:08:16.414 Variable Capacity Management: Not Supported 00:08:16.414 Delete Endurance Group: Not Supported 00:08:16.414 Delete NVM Set: Not Supported 00:08:16.414 Extended LBA Formats Supported: Supported 00:08:16.414 Flexible Data Placement Supported: Supported 00:08:16.414 00:08:16.414 Controller Memory Buffer Support 00:08:16.414 ================================ 00:08:16.414 Supported: No 00:08:16.414 00:08:16.414 Persistent Memory Region Support 00:08:16.414 ================================ 00:08:16.414 Supported: No 00:08:16.414 00:08:16.414 Admin Command Set Attributes 00:08:16.414 ============================ 00:08:16.414 Security Send/Receive: Not Supported 00:08:16.414 Format NVM: Supported 00:08:16.414 Firmware Activate/Download: Not Supported 00:08:16.414 Namespace Management: Supported 00:08:16.414 Device Self-Test: Not Supported 00:08:16.414 Directives: Supported 00:08:16.414 NVMe-MI: Not Supported 00:08:16.414 Virtualization Management: Not Supported 00:08:16.414 Doorbell Buffer Config: Supported 00:08:16.414 Get LBA Status Capability: Not Supported 00:08:16.414 Command & Feature Lockdown Capability: Not Supported 00:08:16.414 Abort Command Limit: 4 00:08:16.414 Async Event Request Limit: 4 00:08:16.414 Number of Firmware Slots: N/A 00:08:16.414 Firmware Slot 1 Read-Only: N/A 00:08:16.414 Firmware Activation Without Reset: N/A 00:08:16.414 Multiple Update
Detection Support: N/A 00:08:16.414 Firmware Update Granularity: No Information Provided 00:08:16.414 Per-Namespace SMART Log: Yes 00:08:16.414 Asymmetric Namespace Access Log Page: Not Supported 00:08:16.414 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:16.414 Command Effects Log Page: Supported 00:08:16.414 Get Log Page Extended Data: Supported 00:08:16.414 Telemetry Log Pages: Not Supported 00:08:16.414 Persistent Event Log Pages: Not Supported 00:08:16.414 Supported Log Pages Log Page: May Support 00:08:16.414 Commands Supported & Effects Log Page: Not Supported 00:08:16.414 Feature Identifiers & Effects Log Page: May Support 00:08:16.414 NVMe-MI Commands & Effects Log Page: May Support 00:08:16.414 Data Area 4 for Telemetry Log: Not Supported 00:08:16.414 Error Log Page Entries Supported: 1 00:08:16.414 Keep Alive: Not Supported 00:08:16.414 00:08:16.414 NVM Command Set Attributes 00:08:16.414 ========================== 00:08:16.414 Submission Queue Entry Size 00:08:16.414 Max: 64 00:08:16.414 Min: 64 00:08:16.414 Completion Queue Entry Size 00:08:16.414 Max: 16 00:08:16.414 Min: 16 00:08:16.414 Number of Namespaces: 256 00:08:16.414 Compare Command: Supported 00:08:16.414 Write Uncorrectable Command: Not Supported 00:08:16.414 Dataset Management Command: Supported 00:08:16.414 Write Zeroes Command: Supported 00:08:16.414 Set Features Save Field: Supported 00:08:16.414 Reservations: Not Supported 00:08:16.414 Timestamp: Supported 00:08:16.414 Copy: Supported 00:08:16.414 Volatile Write Cache: Present 00:08:16.414 Atomic Write Unit (Normal): 1 00:08:16.414 Atomic Write Unit (PFail): 1 00:08:16.414 Atomic Compare & Write Unit: 1 00:08:16.414 Fused Compare & Write: Not Supported 00:08:16.414 Scatter-Gather List 00:08:16.414 SGL Command Set: Supported 00:08:16.414 SGL Keyed: Not Supported 00:08:16.414 SGL Bit Bucket Descriptor: Not Supported 00:08:16.414 SGL Metadata Pointer: Not Supported 00:08:16.414 Oversized SGL: Not Supported 00:08:16.414 SGL Metadata Address: Not Supported 00:08:16.414 SGL Offset: Not Supported 00:08:16.414 Transport SGL Data Block: Not Supported 00:08:16.414 Replay Protected Memory Block: Not Supported 00:08:16.414 00:08:16.414 Firmware Slot Information 00:08:16.414 ========================= 00:08:16.414 Active slot: 1 00:08:16.414 Slot 1 Firmware Revision: 1.0 00:08:16.414 00:08:16.414 00:08:16.414 Commands Supported and Effects 00:08:16.414 ============================== 00:08:16.414 Admin Commands 00:08:16.414 -------------- 00:08:16.414 Delete I/O Submission Queue (00h): Supported 00:08:16.414 Create I/O Submission Queue (01h): Supported 00:08:16.414 Get Log Page (02h): Supported 00:08:16.414 Delete I/O Completion Queue (04h): Supported 00:08:16.414 Create I/O Completion Queue (05h): Supported 00:08:16.414 Identify (06h): Supported 00:08:16.414 Abort (08h): Supported 00:08:16.414 Set Features (09h): Supported 00:08:16.414 Get Features (0Ah): Supported 00:08:16.414 Asynchronous Event Request (0Ch): Supported 00:08:16.414 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:16.414 Directive Send (19h): Supported 00:08:16.414 Directive Receive (1Ah): Supported 00:08:16.414 Virtualization Management (1Ch): Supported 00:08:16.414 Doorbell Buffer Config (7Ch): Supported 00:08:16.414 Format NVM (80h): Supported LBA-Change 00:08:16.414 I/O Commands 00:08:16.414 ------------ 00:08:16.414 Flush (00h): Supported LBA-Change 00:08:16.414 Write (01h): Supported LBA-Change 00:08:16.414 Read (02h): Supported 00:08:16.414 Compare (05h): Supported 00:08:16.414
Write Zeroes (08h): Supported LBA-Change 00:08:16.414 Dataset Management (09h): Supported LBA-Change 00:08:16.414 Unknown (0Ch): Supported 00:08:16.414 Unknown (12h): Supported 00:08:16.414 Copy (19h): Supported LBA-Change 00:08:16.414 Unknown (1Dh): Supported LBA-Change 00:08:16.414 00:08:16.414 Error Log 00:08:16.414 ========= 00:08:16.414 00:08:16.414 Arbitration 00:08:16.414 =========== 00:08:16.414 Arbitration Burst: no limit 00:08:16.414 00:08:16.414 Power Management 00:08:16.414 ================ 00:08:16.414 Number of Power States: 1 00:08:16.414 Current Power State: Power State #0 00:08:16.414 Power State #0: 00:08:16.414 Max Power: 25.00 W 00:08:16.414 Non-Operational State: Operational 00:08:16.414 Entry Latency: 16 microseconds 00:08:16.414 Exit Latency: 4 microseconds 00:08:16.414 Relative Read Throughput: 0 00:08:16.414 Relative Read Latency: 0 00:08:16.414 Relative Write Throughput: 0 00:08:16.414 Relative Write Latency: 0 00:08:16.414 Idle Power: Not Reported 00:08:16.414 Active Power: Not Reported 00:08:16.414 Non-Operational Permissive Mode: Not Supported 00:08:16.414 00:08:16.414 Health Information 00:08:16.414 ================== 00:08:16.414 Critical Warnings: 00:08:16.414 Available Spare Space: OK 00:08:16.414 Temperature: OK 00:08:16.414 Device Reliability: OK 00:08:16.414 Read Only: No 00:08:16.414 Volatile Memory Backup: OK 00:08:16.414 Current Temperature: 323 Kelvin (50 Celsius) 00:08:16.414 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:16.414 Available Spare: 0% 00:08:16.414 Available Spare Threshold: 0% 00:08:16.414 Life Percentage Used: 0% 00:08:16.414 Data Units Read: 1100 00:08:16.414 Data Units Written: 1029 00:08:16.414 Host Read Commands: 39634 00:08:16.414 Host Write Commands: 39057 00:08:16.414 Controller Busy Time: 0 minutes 00:08:16.414 Power Cycles: 0 00:08:16.414 Power On Hours: 0 hours 00:08:16.414 Unsafe Shutdowns: 0 00:08:16.414 Unrecoverable Media Errors: 0 00:08:16.414 Lifetime Error Log Entries: 0 00:08:16.414 Warning Temperature Time: 0 minutes 00:08:16.414 Critical Temperature Time: 0 minutes 00:08:16.414 00:08:16.414 Number of Queues 00:08:16.414 ================ 00:08:16.414 Number of I/O Submission Queues: 64 00:08:16.414 Number of I/O Completion Queues: 64 00:08:16.414 00:08:16.414 ZNS Specific Controller Data 00:08:16.414 ============================ 00:08:16.414 Zone Append Size Limit: 0 00:08:16.414 00:08:16.414 00:08:16.414 Active Namespaces 00:08:16.414 ================= 00:08:16.414 Namespace ID:1 00:08:16.414 Error Recovery Timeout: Unlimited 00:08:16.414 Command Set Identifier: NVM (00h) 00:08:16.414 Deallocate: Supported 00:08:16.414 Deallocated/Unwritten Error: Supported 00:08:16.414 Deallocated Read Value: All 0x00 00:08:16.414 Deallocate in Write Zeroes: Not Supported 00:08:16.414 Deallocated Guard Field: 0xFFFF 00:08:16.414 Flush: Supported 00:08:16.414 Reservation: Not Supported 00:08:16.414 Namespace Sharing Capabilities: Multiple Controllers 00:08:16.414 Size (in LBAs): 262144 (1GiB) 00:08:16.414 Capacity (in LBAs): 262144 (1GiB) 00:08:16.414 Utilization (in LBAs): 262144 (1GiB) 00:08:16.414 Thin Provisioning: Not Supported 00:08:16.414 Per-NS Atomic Units: No 00:08:16.414 Maximum Single Source Range Length: 128 00:08:16.415 Maximum Copy Length: 128 00:08:16.415 Maximum Source Range Count: 128 00:08:16.415 NGUID/EUI64 Never Reused: No 00:08:16.415 Namespace Write Protected: No 00:08:16.415 Endurance group ID: 1 00:08:16.415 Number of LBA Formats: 8 00:08:16.415 Current LBA Format: LBA Format #04 00:08:16.415 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:08:16.415 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:16.415 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:16.415 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:16.415 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:16.415 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:16.415 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:16.415 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:16.415 00:08:16.415 Get Feature FDP: 00:08:16.415 ================ 00:08:16.415 Enabled: Yes 00:08:16.415 FDP configuration index: 0 00:08:16.415 00:08:16.415 FDP configurations log page 00:08:16.415 =========================== 00:08:16.415 Number of FDP configurations: 1 00:08:16.415 Version: 0 00:08:16.415 Size: 112 00:08:16.415 FDP Configuration Descriptor: 0 00:08:16.415 Descriptor Size: 96 00:08:16.415 Reclaim Group Identifier format: 2 00:08:16.415 FDP Volatile Write Cache: Not Present 00:08:16.415 FDP Configuration: Valid 00:08:16.415 Vendor Specific Size: 0 00:08:16.415 Number of Reclaim Groups: 2 00:08:16.415 Number of Reclaim Unit Handles: 8 00:08:16.415 Max Placement Identifiers: 128 00:08:16.415 Number of Namespaces Supported: 256 00:08:16.415 Reclaim Unit Nominal Size: 6000000 bytes 00:08:16.415 Estimated Reclaim Unit Time Limit: Not Reported 00:08:16.415 RUH Desc #000: RUH Type: Initially Isolated 00:08:16.415 RUH Desc #001: RUH Type: Initially Isolated 00:08:16.415 RUH Desc #002: RUH Type: Initially Isolated 00:08:16.415 RUH Desc #003: RUH Type: Initially Isolated 00:08:16.415 RUH Desc #004: RUH Type: Initially Isolated 00:08:16.415 RUH Desc #005: RUH Type: Initially Isolated 00:08:16.415 RUH Desc #006: RUH Type: Initially Isolated 00:08:16.415 RUH Desc #007: RUH Type: Initially Isolated 00:08:16.415 00:08:16.415 FDP reclaim unit handle usage log page 00:08:16.415 ====================================== 00:08:16.415 Number of Reclaim Unit Handles: 8 00:08:16.415 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:16.415 RUH Usage Desc #001: RUH Attributes: Unused 00:08:16.415 RUH Usage Desc #002: RUH Attributes: Unused 00:08:16.415 RUH Usage Desc #003: RUH Attributes: Unused 00:08:16.415 RUH Usage Desc #004: RUH Attributes: Unused 00:08:16.415 RUH Usage Desc #005: RUH Attributes: Unused 00:08:16.415 RUH Usage Desc #006: RUH Attributes: Unused 00:08:16.415 RUH Usage Desc #007: RUH Attributes: Unused 00:08:16.415 00:08:16.415 FDP statistics log page 00:08:16.415 ======================= 00:08:16.415 Host bytes with metadata written: 629673984 00:08:16.415 Media bytes with metadata written: 629829632 00:08:16.415 Media bytes erased: 0 00:08:16.415 00:08:16.415 FDP events log page 00:08:16.415 =================== 00:08:16.415 Number of FDP events: 0 00:08:16.415 00:08:16.415 NVM Specific Namespace Data 00:08:16.415 =========================== 00:08:16.415 Logical Block Storage Tag Mask: 0 00:08:16.415 Protection Information Capabilities: 00:08:16.415 16b Guard Protection Information Storage Tag Support: No 00:08:16.415 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:16.415 Storage Tag Check Read Support: No 00:08:16.415 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.415 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.415 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.415 Extended LBA Format #03: Storage
Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.415 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.415 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.415 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.415 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.415 00:08:16.415 real 0m1.338s 00:08:16.415 user 0m0.455s 00:08:16.415 sys 0m0.642s 00:08:16.415 22:52:55 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.415 ************************************ 00:08:16.415 END TEST nvme_identify 00:08:16.415 22:52:55 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:16.415 ************************************ 00:08:16.415 22:52:55 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:16.415 22:52:55 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:16.415 22:52:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.415 22:52:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:16.415 ************************************ 00:08:16.415 START TEST nvme_perf 00:08:16.415 ************************************ 00:08:16.415 22:52:55 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:08:16.415 22:52:55 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:17.803 Initializing NVMe Controllers 00:08:17.803 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:17.803 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:17.803 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:17.803 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:17.803 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:17.803 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:17.803 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:17.803 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:17.803 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:17.803 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:17.803 Initialization complete. Launching workers. 
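With nvme_identify finished (real 0m1.338s), the harness starts nvme_perf, which drives all six namespaces with 128 outstanding 12288-byte (12 KiB) reads per namespace for one second, with latency tracking enabled; the summary table and per-device histograms follow. The IOPS, MiB/s, and average-latency columns in that table are mutually consistent and also agree with Little's law (IOPS ~ queue depth / mean latency), which can be sanity-checked as below; the figures are taken from the 0000:00:10.0 row of the table that follows:

# Consistency check on the summary table below (0000:00:10.0 row):
# throughput = IOPS x I/O size, and Little's law relates IOPS,
# queue depth, and mean latency.
awk 'BEGIN {
    iops = 7534.59; lat_us = 16998.26            # from the table below
    qd = 128; io_bytes = 12288                   # -q 128 -o 12288 above
    printf "MiB/s: %.2f\n", iops * io_bytes / 1048576   # ~88.30, matches
    printf "Littles-law IOPS: %.0f\n", qd * 1e6 / lat_us  # ~7530, close fit
}'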
00:08:17.803 ======================================================== 00:08:17.803 Latency(us) 00:08:17.803 Device Information : IOPS MiB/s Average min max 00:08:17.803 PCIE (0000:00:10.0) NSID 1 from core 0: 7534.59 88.30 16998.26 10052.93 40600.95 00:08:17.803 PCIE (0000:00:11.0) NSID 1 from core 0: 7534.59 88.30 16981.96 9160.04 40561.79 00:08:17.803 PCIE (0000:00:13.0) NSID 1 from core 0: 7534.59 88.30 16960.38 7164.51 41500.34 00:08:17.803 PCIE (0000:00:12.0) NSID 1 from core 0: 7534.59 88.30 16938.48 6289.77 40968.08 00:08:17.803 PCIE (0000:00:12.0) NSID 2 from core 0: 7534.59 88.30 16914.63 5424.34 40860.14 00:08:17.803 PCIE (0000:00:12.0) NSID 3 from core 0: 7598.44 89.04 16749.51 4749.95 30304.66 00:08:17.803 ======================================================== 00:08:17.803 Total : 45271.38 530.52 16923.62 4749.95 41500.34 00:08:17.803 00:08:17.803 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:17.803 ================================================================================= 00:08:17.803 1.00000% : 13510.498us 00:08:17.804 10.00000% : 14720.394us 00:08:17.804 25.00000% : 15526.991us 00:08:17.804 50.00000% : 16535.237us 00:08:17.804 75.00000% : 17845.957us 00:08:17.804 90.00000% : 19459.151us 00:08:17.804 95.00000% : 20669.046us 00:08:17.804 98.00000% : 23391.311us 00:08:17.804 99.00000% : 29440.788us 00:08:17.804 99.50000% : 39724.898us 00:08:17.804 99.90000% : 40531.495us 00:08:17.804 99.99000% : 40733.145us 00:08:17.804 99.99900% : 40733.145us 00:08:17.804 99.99990% : 40733.145us 00:08:17.804 99.99999% : 40733.145us 00:08:17.804 00:08:17.804 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:17.804 ================================================================================= 00:08:17.804 1.00000% : 13510.498us 00:08:17.804 10.00000% : 14720.394us 00:08:17.804 25.00000% : 15526.991us 00:08:17.804 50.00000% : 16535.237us 00:08:17.804 75.00000% : 17745.132us 00:08:17.804 90.00000% : 19358.326us 00:08:17.804 95.00000% : 20265.748us 00:08:17.804 98.00000% : 25306.978us 00:08:17.804 99.00000% : 29440.788us 00:08:17.804 99.50000% : 39724.898us 00:08:17.804 99.90000% : 40531.495us 00:08:17.804 99.99000% : 40733.145us 00:08:17.804 99.99900% : 40733.145us 00:08:17.804 99.99990% : 40733.145us 00:08:17.804 99.99999% : 40733.145us 00:08:17.804 00:08:17.804 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:17.804 ================================================================================= 00:08:17.804 1.00000% : 13308.849us 00:08:17.804 10.00000% : 14720.394us 00:08:17.804 25.00000% : 15526.991us 00:08:17.804 50.00000% : 16535.237us 00:08:17.804 75.00000% : 17745.132us 00:08:17.804 90.00000% : 19358.326us 00:08:17.804 95.00000% : 20467.397us 00:08:17.804 98.00000% : 24097.083us 00:08:17.804 99.00000% : 30449.034us 00:08:17.804 99.50000% : 40733.145us 00:08:17.804 99.90000% : 41539.742us 00:08:17.804 99.99000% : 41539.742us 00:08:17.804 99.99900% : 41539.742us 00:08:17.804 99.99990% : 41539.742us 00:08:17.804 99.99999% : 41539.742us 00:08:17.804 00:08:17.804 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:17.804 ================================================================================= 00:08:17.804 1.00000% : 13409.674us 00:08:17.804 10.00000% : 14619.569us 00:08:17.804 25.00000% : 15526.991us 00:08:17.804 50.00000% : 16535.237us 00:08:17.804 75.00000% : 17845.957us 00:08:17.804 90.00000% : 19358.326us 00:08:17.804 95.00000% : 20265.748us 00:08:17.804 98.00000% : 22887.188us 
00:08:17.804 99.00000% : 29642.437us 00:08:17.804 99.50000% : 40329.846us 00:08:17.804 99.90000% : 40934.794us 00:08:17.804 99.99000% : 41136.443us 00:08:17.804 99.99900% : 41136.443us 00:08:17.804 99.99990% : 41136.443us 00:08:17.804 99.99999% : 41136.443us 00:08:17.804 00:08:17.804 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:17.804 ================================================================================= 00:08:17.804 1.00000% : 12502.252us 00:08:17.804 10.00000% : 14720.394us 00:08:17.804 25.00000% : 15426.166us 00:08:17.804 50.00000% : 16535.237us 00:08:17.804 75.00000% : 17845.957us 00:08:17.804 90.00000% : 19358.326us 00:08:17.804 95.00000% : 20467.397us 00:08:17.804 98.00000% : 23088.837us 00:08:17.804 99.00000% : 29440.788us 00:08:17.804 99.50000% : 39926.548us 00:08:17.804 99.90000% : 40733.145us 00:08:17.804 99.99000% : 40934.794us 00:08:17.804 99.99900% : 40934.794us 00:08:17.804 99.99990% : 40934.794us 00:08:17.804 99.99999% : 40934.794us 00:08:17.804 00:08:17.804 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:17.804 ================================================================================= 00:08:17.804 1.00000% : 11393.182us 00:08:17.804 10.00000% : 14720.394us 00:08:17.804 25.00000% : 15426.166us 00:08:17.804 50.00000% : 16535.237us 00:08:17.804 75.00000% : 17946.782us 00:08:17.804 90.00000% : 19358.326us 00:08:17.804 95.00000% : 20265.748us 00:08:17.804 98.00000% : 22080.591us 00:08:17.804 99.00000% : 23290.486us 00:08:17.804 99.50000% : 29642.437us 00:08:17.804 99.90000% : 30247.385us 00:08:17.804 99.99000% : 30449.034us 00:08:17.804 99.99900% : 30449.034us 00:08:17.804 99.99990% : 30449.034us 00:08:17.804 99.99999% : 30449.034us 00:08:17.804 00:08:17.804 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:17.804 ============================================================================== 00:08:17.804 Range in us Cumulative IO count 00:08:17.804 10032.049 - 10082.462: 0.0265% ( 2) 00:08:17.804 10082.462 - 10132.874: 0.0794% ( 4) 00:08:17.804 10132.874 - 10183.286: 0.0927% ( 1) 00:08:17.804 10183.286 - 10233.698: 0.1721% ( 6) 00:08:17.804 10233.698 - 10284.111: 0.1854% ( 1) 00:08:17.804 10284.111 - 10334.523: 0.2383% ( 4) 00:08:17.804 10334.523 - 10384.935: 0.2648% ( 2) 00:08:17.804 10384.935 - 10435.348: 0.3178% ( 4) 00:08:17.804 10435.348 - 10485.760: 0.3575% ( 3) 00:08:17.804 10485.760 - 10536.172: 0.3840% ( 2) 00:08:17.804 10536.172 - 10586.585: 0.4502% ( 5) 00:08:17.804 10586.585 - 10636.997: 0.4767% ( 2) 00:08:17.804 10636.997 - 10687.409: 0.5297% ( 4) 00:08:17.804 10687.409 - 10737.822: 0.5694% ( 3) 00:08:17.804 10737.822 - 10788.234: 0.6091% ( 3) 00:08:17.804 10788.234 - 10838.646: 0.6356% ( 2) 00:08:17.804 10838.646 - 10889.058: 0.7018% ( 5) 00:08:17.804 10889.058 - 10939.471: 0.7283% ( 2) 00:08:17.804 10939.471 - 10989.883: 0.7812% ( 4) 00:08:17.804 10989.883 - 11040.295: 0.8077% ( 2) 00:08:17.804 11040.295 - 11090.708: 0.8475% ( 3) 00:08:17.804 13107.200 - 13208.025: 0.8607% ( 1) 00:08:17.804 13208.025 - 13308.849: 0.9137% ( 4) 00:08:17.804 13308.849 - 13409.674: 0.9534% ( 3) 00:08:17.804 13409.674 - 13510.498: 1.0858% ( 10) 00:08:17.804 13510.498 - 13611.323: 1.2050% ( 9) 00:08:17.804 13611.323 - 13712.148: 1.4036% ( 15) 00:08:17.804 13712.148 - 13812.972: 1.7082% ( 23) 00:08:17.804 13812.972 - 13913.797: 2.2113% ( 38) 00:08:17.804 13913.797 - 14014.622: 2.9793% ( 58) 00:08:17.804 14014.622 - 14115.446: 3.6547% ( 51) 00:08:17.804 14115.446 - 14216.271: 4.4227% ( 58) 00:08:17.804 
14216.271 - 14317.095: 5.1642% ( 56) 00:08:17.804 14317.095 - 14417.920: 6.1573% ( 75) 00:08:17.804 14417.920 - 14518.745: 7.3358% ( 89) 00:08:17.804 14518.745 - 14619.569: 8.5275% ( 90) 00:08:17.804 14619.569 - 14720.394: 10.1695% ( 124) 00:08:17.804 14720.394 - 14821.218: 12.0763% ( 144) 00:08:17.804 14821.218 - 14922.043: 13.7315% ( 125) 00:08:17.804 14922.043 - 15022.868: 15.8104% ( 157) 00:08:17.804 15022.868 - 15123.692: 17.9952% ( 165) 00:08:17.804 15123.692 - 15224.517: 20.1139% ( 160) 00:08:17.805 15224.517 - 15325.342: 22.3385% ( 168) 00:08:17.805 15325.342 - 15426.166: 24.7352% ( 181) 00:08:17.805 15426.166 - 15526.991: 26.9200% ( 165) 00:08:17.805 15526.991 - 15627.815: 29.2903% ( 179) 00:08:17.805 15627.815 - 15728.640: 31.8194% ( 191) 00:08:17.805 15728.640 - 15829.465: 34.3353% ( 190) 00:08:17.805 15829.465 - 15930.289: 36.8909% ( 193) 00:08:17.805 15930.289 - 16031.114: 39.4068% ( 190) 00:08:17.805 16031.114 - 16131.938: 41.5916% ( 165) 00:08:17.805 16131.938 - 16232.763: 44.4386% ( 215) 00:08:17.805 16232.763 - 16333.588: 46.6764% ( 169) 00:08:17.805 16333.588 - 16434.412: 49.0996% ( 183) 00:08:17.805 16434.412 - 16535.237: 51.0593% ( 148) 00:08:17.805 16535.237 - 16636.062: 53.1780% ( 160) 00:08:17.805 16636.062 - 16736.886: 55.2834% ( 159) 00:08:17.805 16736.886 - 16837.711: 57.4020% ( 160) 00:08:17.805 16837.711 - 16938.535: 59.3750% ( 149) 00:08:17.805 16938.535 - 17039.360: 61.2818% ( 144) 00:08:17.805 17039.360 - 17140.185: 63.1621% ( 142) 00:08:17.805 17140.185 - 17241.009: 65.3072% ( 162) 00:08:17.805 17241.009 - 17341.834: 67.0418% ( 131) 00:08:17.805 17341.834 - 17442.658: 68.9883% ( 147) 00:08:17.805 17442.658 - 17543.483: 70.6171% ( 123) 00:08:17.805 17543.483 - 17644.308: 72.0604% ( 109) 00:08:17.805 17644.308 - 17745.132: 73.5567% ( 113) 00:08:17.805 17745.132 - 17845.957: 75.1059% ( 117) 00:08:17.805 17845.957 - 17946.782: 76.8008% ( 128) 00:08:17.805 17946.782 - 18047.606: 78.0720% ( 96) 00:08:17.805 18047.606 - 18148.431: 79.3697% ( 98) 00:08:17.805 18148.431 - 18249.255: 80.8528% ( 112) 00:08:17.805 18249.255 - 18350.080: 81.9518% ( 83) 00:08:17.805 18350.080 - 18450.905: 82.8522% ( 68) 00:08:17.805 18450.905 - 18551.729: 83.7659% ( 69) 00:08:17.805 18551.729 - 18652.554: 84.4677% ( 53) 00:08:17.805 18652.554 - 18753.378: 85.5535% ( 82) 00:08:17.805 18753.378 - 18854.203: 86.4407% ( 67) 00:08:17.805 18854.203 - 18955.028: 87.0498% ( 46) 00:08:17.805 18955.028 - 19055.852: 87.8708% ( 62) 00:08:17.805 19055.852 - 19156.677: 88.3607% ( 37) 00:08:17.805 19156.677 - 19257.502: 89.1155% ( 57) 00:08:17.805 19257.502 - 19358.326: 89.6319% ( 39) 00:08:17.805 19358.326 - 19459.151: 90.1218% ( 37) 00:08:17.805 19459.151 - 19559.975: 90.6382% ( 39) 00:08:17.805 19559.975 - 19660.800: 91.1944% ( 42) 00:08:17.805 19660.800 - 19761.625: 91.6976% ( 38) 00:08:17.805 19761.625 - 19862.449: 92.0948% ( 30) 00:08:17.805 19862.449 - 19963.274: 92.4921% ( 30) 00:08:17.805 19963.274 - 20064.098: 93.0217% ( 40) 00:08:17.805 20064.098 - 20164.923: 93.4322% ( 31) 00:08:17.805 20164.923 - 20265.748: 93.9221% ( 37) 00:08:17.805 20265.748 - 20366.572: 94.3459% ( 32) 00:08:17.805 20366.572 - 20467.397: 94.6107% ( 20) 00:08:17.805 20467.397 - 20568.222: 94.8490% ( 18) 00:08:17.805 20568.222 - 20669.046: 95.1139% ( 20) 00:08:17.805 20669.046 - 20769.871: 95.4714% ( 27) 00:08:17.805 20769.871 - 20870.695: 95.7892% ( 24) 00:08:17.805 20870.695 - 20971.520: 96.0408% ( 19) 00:08:17.805 20971.520 - 21072.345: 96.3189% ( 21) 00:08:17.805 21072.345 - 21173.169: 96.5440% ( 17) 00:08:17.805 
21173.169 - 21273.994: 96.6499% ( 8) 00:08:17.805 21273.994 - 21374.818: 96.8353% ( 14) 00:08:17.805 21374.818 - 21475.643: 96.9280% ( 7) 00:08:17.805 21475.643 - 21576.468: 97.0471% ( 9) 00:08:17.805 21576.468 - 21677.292: 97.1398% ( 7) 00:08:17.805 21677.292 - 21778.117: 97.2060% ( 5) 00:08:17.805 21778.117 - 21878.942: 97.2458% ( 3) 00:08:17.805 21878.942 - 21979.766: 97.3120% ( 5) 00:08:17.805 21979.766 - 22080.591: 97.3385% ( 2) 00:08:17.805 22080.591 - 22181.415: 97.4047% ( 5) 00:08:17.805 22181.415 - 22282.240: 97.4709% ( 5) 00:08:17.805 22282.240 - 22383.065: 97.5106% ( 3) 00:08:17.805 22383.065 - 22483.889: 97.5371% ( 2) 00:08:17.805 22483.889 - 22584.714: 97.5900% ( 4) 00:08:17.805 22584.714 - 22685.538: 97.6298% ( 3) 00:08:17.805 22685.538 - 22786.363: 97.7357% ( 8) 00:08:17.805 22786.363 - 22887.188: 97.8549% ( 9) 00:08:17.805 22887.188 - 22988.012: 97.8681% ( 1) 00:08:17.805 22988.012 - 23088.837: 97.8814% ( 1) 00:08:17.805 23088.837 - 23189.662: 97.9476% ( 5) 00:08:17.805 23189.662 - 23290.486: 97.9873% ( 3) 00:08:17.805 23290.486 - 23391.311: 98.1197% ( 10) 00:08:17.805 23391.311 - 23492.135: 98.1329% ( 1) 00:08:17.805 23492.135 - 23592.960: 98.1462% ( 1) 00:08:17.805 23592.960 - 23693.785: 98.2124% ( 5) 00:08:17.805 23693.785 - 23794.609: 98.2521% ( 3) 00:08:17.805 23794.609 - 23895.434: 98.3051% ( 4) 00:08:17.805 28029.243 - 28230.892: 98.3713% ( 5) 00:08:17.805 28230.892 - 28432.542: 98.4507% ( 6) 00:08:17.805 28432.542 - 28634.191: 98.5434% ( 7) 00:08:17.805 28634.191 - 28835.840: 98.6626% ( 9) 00:08:17.805 28835.840 - 29037.489: 98.7685% ( 8) 00:08:17.805 29037.489 - 29239.138: 98.9142% ( 11) 00:08:17.805 29239.138 - 29440.788: 99.0069% ( 7) 00:08:17.805 29440.788 - 29642.437: 99.0996% ( 7) 00:08:17.805 29642.437 - 29844.086: 99.1525% ( 4) 00:08:17.805 38918.302 - 39119.951: 99.2188% ( 5) 00:08:17.805 39119.951 - 39321.600: 99.3114% ( 7) 00:08:17.805 39321.600 - 39523.249: 99.4041% ( 7) 00:08:17.805 39523.249 - 39724.898: 99.5365% ( 10) 00:08:17.805 39724.898 - 39926.548: 99.6160% ( 6) 00:08:17.805 39926.548 - 40128.197: 99.7352% ( 9) 00:08:17.805 40128.197 - 40329.846: 99.8543% ( 9) 00:08:17.805 40329.846 - 40531.495: 99.9603% ( 8) 00:08:17.805 40531.495 - 40733.145: 100.0000% ( 3) 00:08:17.805 00:08:17.805 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:17.805 ============================================================================== 00:08:17.805 Range in us Cumulative IO count 00:08:17.805 9124.628 - 9175.040: 0.0132% ( 1) 00:08:17.805 9175.040 - 9225.452: 0.0662% ( 4) 00:08:17.805 9225.452 - 9275.865: 0.1059% ( 3) 00:08:17.805 9275.865 - 9326.277: 0.1721% ( 5) 00:08:17.805 9326.277 - 9376.689: 0.2119% ( 3) 00:08:17.805 9376.689 - 9427.102: 0.2781% ( 5) 00:08:17.805 9427.102 - 9477.514: 0.3178% ( 3) 00:08:17.805 9477.514 - 9527.926: 0.3708% ( 4) 00:08:17.805 9527.926 - 9578.338: 0.4105% ( 3) 00:08:17.805 9578.338 - 9628.751: 0.4370% ( 2) 00:08:17.805 9628.751 - 9679.163: 0.4767% ( 3) 00:08:17.805 9679.163 - 9729.575: 0.5164% ( 3) 00:08:17.805 9729.575 - 9779.988: 0.5561% ( 3) 00:08:17.805 9779.988 - 9830.400: 0.6091% ( 4) 00:08:17.805 9830.400 - 9880.812: 0.6488% ( 3) 00:08:17.805 9880.812 - 9931.225: 0.6886% ( 3) 00:08:17.805 9931.225 - 9981.637: 0.7415% ( 4) 00:08:17.805 9981.637 - 10032.049: 0.7945% ( 4) 00:08:17.805 10032.049 - 10082.462: 0.8342% ( 3) 00:08:17.805 10082.462 - 10132.874: 0.8475% ( 1) 00:08:17.806 13107.200 - 13208.025: 0.8739% ( 2) 00:08:17.806 13208.025 - 13308.849: 0.9269% ( 4) 00:08:17.806 13308.849 - 13409.674: 0.9799% 
( 4) 00:08:17.806 13409.674 - 13510.498: 1.1123% ( 10) 00:08:17.806 13510.498 - 13611.323: 1.2712% ( 12) 00:08:17.806 13611.323 - 13712.148: 1.6022% ( 25) 00:08:17.806 13712.148 - 13812.972: 1.8406% ( 18) 00:08:17.806 13812.972 - 13913.797: 2.1451% ( 23) 00:08:17.806 13913.797 - 14014.622: 2.6218% ( 36) 00:08:17.806 14014.622 - 14115.446: 3.2442% ( 47) 00:08:17.806 14115.446 - 14216.271: 3.9989% ( 57) 00:08:17.806 14216.271 - 14317.095: 4.9656% ( 73) 00:08:17.806 14317.095 - 14417.920: 6.1441% ( 89) 00:08:17.806 14417.920 - 14518.745: 7.2961% ( 87) 00:08:17.806 14518.745 - 14619.569: 8.7659% ( 111) 00:08:17.806 14619.569 - 14720.394: 10.3549% ( 120) 00:08:17.806 14720.394 - 14821.218: 11.8247% ( 111) 00:08:17.806 14821.218 - 14922.043: 13.4401% ( 122) 00:08:17.806 14922.043 - 15022.868: 15.5588% ( 160) 00:08:17.806 15022.868 - 15123.692: 17.6377% ( 157) 00:08:17.806 15123.692 - 15224.517: 19.8358% ( 166) 00:08:17.806 15224.517 - 15325.342: 22.2722% ( 184) 00:08:17.806 15325.342 - 15426.166: 24.5498% ( 172) 00:08:17.806 15426.166 - 15526.991: 26.6817% ( 161) 00:08:17.806 15526.991 - 15627.815: 28.6944% ( 152) 00:08:17.806 15627.815 - 15728.640: 30.9852% ( 173) 00:08:17.806 15728.640 - 15829.465: 33.5143% ( 191) 00:08:17.806 15829.465 - 15930.289: 35.8713% ( 178) 00:08:17.806 15930.289 - 16031.114: 38.0561% ( 165) 00:08:17.806 16031.114 - 16131.938: 40.4793% ( 183) 00:08:17.806 16131.938 - 16232.763: 43.2601% ( 210) 00:08:17.806 16232.763 - 16333.588: 45.8422% ( 195) 00:08:17.806 16333.588 - 16434.412: 48.0403% ( 166) 00:08:17.806 16434.412 - 16535.237: 50.4370% ( 181) 00:08:17.806 16535.237 - 16636.062: 52.6483% ( 167) 00:08:17.806 16636.062 - 16736.886: 54.9656% ( 175) 00:08:17.806 16736.886 - 16837.711: 57.2299% ( 171) 00:08:17.806 16837.711 - 16938.535: 59.4280% ( 166) 00:08:17.806 16938.535 - 17039.360: 61.7320% ( 174) 00:08:17.806 17039.360 - 17140.185: 63.9168% ( 165) 00:08:17.806 17140.185 - 17241.009: 65.9958% ( 157) 00:08:17.806 17241.009 - 17341.834: 67.8231% ( 138) 00:08:17.806 17341.834 - 17442.658: 69.7961% ( 149) 00:08:17.806 17442.658 - 17543.483: 71.7029% ( 144) 00:08:17.806 17543.483 - 17644.308: 73.5302% ( 138) 00:08:17.806 17644.308 - 17745.132: 75.0927% ( 118) 00:08:17.806 17745.132 - 17845.957: 76.6022% ( 114) 00:08:17.806 17845.957 - 17946.782: 78.1647% ( 118) 00:08:17.806 17946.782 - 18047.606: 79.3829% ( 92) 00:08:17.806 18047.606 - 18148.431: 80.5747% ( 90) 00:08:17.806 18148.431 - 18249.255: 81.6340% ( 80) 00:08:17.806 18249.255 - 18350.080: 82.5609% ( 70) 00:08:17.806 18350.080 - 18450.905: 83.4216% ( 65) 00:08:17.806 18450.905 - 18551.729: 84.2293% ( 61) 00:08:17.806 18551.729 - 18652.554: 85.1165% ( 67) 00:08:17.806 18652.554 - 18753.378: 85.9507% ( 63) 00:08:17.806 18753.378 - 18854.203: 86.9041% ( 72) 00:08:17.806 18854.203 - 18955.028: 87.6986% ( 60) 00:08:17.806 18955.028 - 19055.852: 88.4799% ( 59) 00:08:17.806 19055.852 - 19156.677: 89.1022% ( 47) 00:08:17.806 19156.677 - 19257.502: 89.7246% ( 47) 00:08:17.806 19257.502 - 19358.326: 90.3072% ( 44) 00:08:17.806 19358.326 - 19459.151: 90.8369% ( 40) 00:08:17.806 19459.151 - 19559.975: 91.4195% ( 44) 00:08:17.806 19559.975 - 19660.800: 92.1478% ( 55) 00:08:17.806 19660.800 - 19761.625: 92.8761% ( 55) 00:08:17.806 19761.625 - 19862.449: 93.4984% ( 47) 00:08:17.806 19862.449 - 19963.274: 93.9751% ( 36) 00:08:17.806 19963.274 - 20064.098: 94.4518% ( 36) 00:08:17.806 20064.098 - 20164.923: 94.8358% ( 29) 00:08:17.806 20164.923 - 20265.748: 95.2331% ( 30) 00:08:17.806 20265.748 - 20366.572: 95.6038% ( 28) 
00:08:17.806 20366.572 - 20467.397: 95.8819% ( 21) 00:08:17.806 20467.397 - 20568.222: 96.1335% ( 19) 00:08:17.806 20568.222 - 20669.046: 96.3718% ( 18) 00:08:17.806 20669.046 - 20769.871: 96.5704% ( 15) 00:08:17.806 20769.871 - 20870.695: 96.6102% ( 3) 00:08:17.806 21374.818 - 21475.643: 96.6896% ( 6) 00:08:17.806 21475.643 - 21576.468: 96.7426% ( 4) 00:08:17.806 21576.468 - 21677.292: 96.7956% ( 4) 00:08:17.806 21677.292 - 21778.117: 96.8353% ( 3) 00:08:17.806 21778.117 - 21878.942: 96.8882% ( 4) 00:08:17.806 21878.942 - 21979.766: 96.9677% ( 6) 00:08:17.806 21979.766 - 22080.591: 97.0339% ( 5) 00:08:17.806 22080.591 - 22181.415: 97.1001% ( 5) 00:08:17.806 22181.415 - 22282.240: 97.1398% ( 3) 00:08:17.806 22282.240 - 22383.065: 97.2060% ( 5) 00:08:17.806 22383.065 - 22483.889: 97.2722% ( 5) 00:08:17.806 22483.889 - 22584.714: 97.3782% ( 8) 00:08:17.806 22584.714 - 22685.538: 97.5106% ( 10) 00:08:17.806 22685.538 - 22786.363: 97.5768% ( 5) 00:08:17.806 24601.206 - 24702.031: 97.6033% ( 2) 00:08:17.806 24702.031 - 24802.855: 97.6695% ( 5) 00:08:17.806 24802.855 - 24903.680: 97.7357% ( 5) 00:08:17.806 24903.680 - 25004.505: 97.8019% ( 5) 00:08:17.806 25004.505 - 25105.329: 97.8814% ( 6) 00:08:17.806 25105.329 - 25206.154: 97.9476% ( 5) 00:08:17.806 25206.154 - 25306.978: 98.0005% ( 4) 00:08:17.806 25407.803 - 25508.628: 98.0535% ( 4) 00:08:17.806 25508.628 - 25609.452: 98.1065% ( 4) 00:08:17.806 25609.452 - 25710.277: 98.1594% ( 4) 00:08:17.806 25710.277 - 25811.102: 98.2124% ( 4) 00:08:17.806 25811.102 - 26012.751: 98.3051% ( 7) 00:08:17.806 28029.243 - 28230.892: 98.3845% ( 6) 00:08:17.806 28230.892 - 28432.542: 98.5037% ( 9) 00:08:17.806 28432.542 - 28634.191: 98.6229% ( 9) 00:08:17.806 28634.191 - 28835.840: 98.7288% ( 8) 00:08:17.806 28835.840 - 29037.489: 98.8347% ( 8) 00:08:17.806 29037.489 - 29239.138: 98.9274% ( 7) 00:08:17.806 29239.138 - 29440.788: 99.0466% ( 9) 00:08:17.806 29440.788 - 29642.437: 99.1525% ( 8) 00:08:17.806 38918.302 - 39119.951: 99.1923% ( 3) 00:08:17.806 39119.951 - 39321.600: 99.3247% ( 10) 00:08:17.806 39321.600 - 39523.249: 99.4439% ( 9) 00:08:17.806 39523.249 - 39724.898: 99.5498% ( 8) 00:08:17.806 39724.898 - 39926.548: 99.6425% ( 7) 00:08:17.806 39926.548 - 40128.197: 99.7352% ( 7) 00:08:17.806 40128.197 - 40329.846: 99.8543% ( 9) 00:08:17.806 40329.846 - 40531.495: 99.9735% ( 9) 00:08:17.806 40531.495 - 40733.145: 100.0000% ( 2) 00:08:17.806 00:08:17.806 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:17.806 ============================================================================== 00:08:17.806 Range in us Cumulative IO count 00:08:17.806 7158.548 - 7208.960: 0.0397% ( 3) 00:08:17.806 7208.960 - 7259.372: 0.0927% ( 4) 00:08:17.806 7259.372 - 7309.785: 0.1324% ( 3) 00:08:17.806 7309.785 - 7360.197: 0.1854% ( 4) 00:08:17.806 7360.197 - 7410.609: 0.2516% ( 5) 00:08:17.806 7410.609 - 7461.022: 0.3178% ( 5) 00:08:17.806 7461.022 - 7511.434: 0.3708% ( 4) 00:08:17.806 7511.434 - 7561.846: 0.4237% ( 4) 00:08:17.806 7561.846 - 7612.258: 0.4767% ( 4) 00:08:17.806 7612.258 - 7662.671: 0.5164% ( 3) 00:08:17.806 7662.671 - 7713.083: 0.5694% ( 4) 00:08:17.806 7713.083 - 7763.495: 0.6224% ( 4) 00:08:17.806 7763.495 - 7813.908: 0.6488% ( 2) 00:08:17.807 7813.908 - 7864.320: 0.6886% ( 3) 00:08:17.807 7864.320 - 7914.732: 0.7415% ( 4) 00:08:17.807 7914.732 - 7965.145: 0.7812% ( 3) 00:08:17.807 7965.145 - 8015.557: 0.8210% ( 3) 00:08:17.807 8015.557 - 8065.969: 0.8475% ( 2) 00:08:17.807 12905.551 - 13006.375: 0.8607% ( 1) 00:08:17.807 13006.375 - 
13107.200: 0.9004% ( 3) 00:08:17.807 13107.200 - 13208.025: 0.9666% ( 5) 00:08:17.807 13208.025 - 13308.849: 1.0328% ( 5) 00:08:17.807 13308.849 - 13409.674: 1.1123% ( 6) 00:08:17.807 13409.674 - 13510.498: 1.2977% ( 14) 00:08:17.807 13510.498 - 13611.323: 1.4831% ( 14) 00:08:17.807 13611.323 - 13712.148: 1.8406% ( 27) 00:08:17.807 13712.148 - 13812.972: 2.3305% ( 37) 00:08:17.807 13812.972 - 13913.797: 2.8734% ( 41) 00:08:17.807 13913.797 - 14014.622: 3.3766% ( 38) 00:08:17.807 14014.622 - 14115.446: 4.0651% ( 52) 00:08:17.807 14115.446 - 14216.271: 5.0053% ( 71) 00:08:17.807 14216.271 - 14317.095: 6.0779% ( 81) 00:08:17.807 14317.095 - 14417.920: 7.2034% ( 85) 00:08:17.807 14417.920 - 14518.745: 8.4349% ( 93) 00:08:17.807 14518.745 - 14619.569: 9.8649% ( 108) 00:08:17.807 14619.569 - 14720.394: 11.4539% ( 120) 00:08:17.807 14720.394 - 14821.218: 13.0164% ( 118) 00:08:17.807 14821.218 - 14922.043: 14.7775% ( 133) 00:08:17.807 14922.043 - 15022.868: 16.9227% ( 162) 00:08:17.807 15022.868 - 15123.692: 19.0413% ( 160) 00:08:17.807 15123.692 - 15224.517: 20.8422% ( 136) 00:08:17.807 15224.517 - 15325.342: 22.8946% ( 155) 00:08:17.807 15325.342 - 15426.166: 24.9603% ( 156) 00:08:17.807 15426.166 - 15526.991: 26.9597% ( 151) 00:08:17.807 15526.991 - 15627.815: 28.9857% ( 153) 00:08:17.807 15627.815 - 15728.640: 31.0911% ( 159) 00:08:17.807 15728.640 - 15829.465: 33.1435% ( 155) 00:08:17.807 15829.465 - 15930.289: 35.2887% ( 162) 00:08:17.807 15930.289 - 16031.114: 37.5265% ( 169) 00:08:17.807 16031.114 - 16131.938: 39.8173% ( 173) 00:08:17.807 16131.938 - 16232.763: 42.2802% ( 186) 00:08:17.807 16232.763 - 16333.588: 44.8226% ( 192) 00:08:17.807 16333.588 - 16434.412: 47.3649% ( 192) 00:08:17.807 16434.412 - 16535.237: 50.0265% ( 201) 00:08:17.807 16535.237 - 16636.062: 52.6086% ( 195) 00:08:17.807 16636.062 - 16736.886: 55.0847% ( 187) 00:08:17.807 16736.886 - 16837.711: 57.5874% ( 189) 00:08:17.807 16837.711 - 16938.535: 59.9841% ( 181) 00:08:17.807 16938.535 - 17039.360: 62.5000% ( 190) 00:08:17.807 17039.360 - 17140.185: 64.6054% ( 159) 00:08:17.807 17140.185 - 17241.009: 66.6578% ( 155) 00:08:17.807 17241.009 - 17341.834: 68.9089% ( 170) 00:08:17.807 17341.834 - 17442.658: 70.8554% ( 147) 00:08:17.807 17442.658 - 17543.483: 72.5238% ( 126) 00:08:17.807 17543.483 - 17644.308: 74.1790% ( 125) 00:08:17.807 17644.308 - 17745.132: 75.5694% ( 105) 00:08:17.807 17745.132 - 17845.957: 76.7214% ( 87) 00:08:17.807 17845.957 - 17946.782: 77.8469% ( 85) 00:08:17.807 17946.782 - 18047.606: 79.0651% ( 92) 00:08:17.807 18047.606 - 18148.431: 80.4555% ( 105) 00:08:17.807 18148.431 - 18249.255: 81.6737% ( 92) 00:08:17.807 18249.255 - 18350.080: 82.7463% ( 81) 00:08:17.807 18350.080 - 18450.905: 83.6997% ( 72) 00:08:17.807 18450.905 - 18551.729: 84.7325% ( 78) 00:08:17.807 18551.729 - 18652.554: 85.6859% ( 72) 00:08:17.807 18652.554 - 18753.378: 86.5069% ( 62) 00:08:17.807 18753.378 - 18854.203: 87.3543% ( 64) 00:08:17.807 18854.203 - 18955.028: 88.0826% ( 55) 00:08:17.807 18955.028 - 19055.852: 88.7977% ( 54) 00:08:17.807 19055.852 - 19156.677: 89.4333% ( 48) 00:08:17.807 19156.677 - 19257.502: 89.9894% ( 42) 00:08:17.807 19257.502 - 19358.326: 90.5456% ( 42) 00:08:17.807 19358.326 - 19459.151: 91.0885% ( 41) 00:08:17.807 19459.151 - 19559.975: 91.6578% ( 43) 00:08:17.807 19559.975 - 19660.800: 92.1478% ( 37) 00:08:17.807 19660.800 - 19761.625: 92.6377% ( 37) 00:08:17.807 19761.625 - 19862.449: 92.9820% ( 26) 00:08:17.807 19862.449 - 19963.274: 93.3792% ( 30) 00:08:17.807 19963.274 - 20064.098: 93.7368% ( 
27) 00:08:17.807 20064.098 - 20164.923: 94.0678% ( 25) 00:08:17.807 20164.923 - 20265.748: 94.4650% ( 30) 00:08:17.807 20265.748 - 20366.572: 94.8358% ( 28) 00:08:17.807 20366.572 - 20467.397: 95.2993% ( 35) 00:08:17.807 20467.397 - 20568.222: 95.6700% ( 28) 00:08:17.807 20568.222 - 20669.046: 96.0011% ( 25) 00:08:17.807 20669.046 - 20769.871: 96.1997% ( 15) 00:08:17.807 20769.871 - 20870.695: 96.3056% ( 8) 00:08:17.807 20870.695 - 20971.520: 96.3851% ( 6) 00:08:17.807 20971.520 - 21072.345: 96.4513% ( 5) 00:08:17.807 21072.345 - 21173.169: 96.5042% ( 4) 00:08:17.807 21173.169 - 21273.994: 96.5572% ( 4) 00:08:17.807 21273.994 - 21374.818: 96.6102% ( 4) 00:08:17.807 21979.766 - 22080.591: 96.6499% ( 3) 00:08:17.807 22080.591 - 22181.415: 96.7029% ( 4) 00:08:17.807 22181.415 - 22282.240: 96.7426% ( 3) 00:08:17.807 22282.240 - 22383.065: 96.8088% ( 5) 00:08:17.807 22383.065 - 22483.889: 96.8750% ( 5) 00:08:17.807 22483.889 - 22584.714: 96.9412% ( 5) 00:08:17.807 22584.714 - 22685.538: 96.9942% ( 4) 00:08:17.807 22685.538 - 22786.363: 97.0604% ( 5) 00:08:17.807 22786.363 - 22887.188: 97.1266% ( 5) 00:08:17.807 22887.188 - 22988.012: 97.1928% ( 5) 00:08:17.807 22988.012 - 23088.837: 97.2722% ( 6) 00:08:17.807 23088.837 - 23189.662: 97.3385% ( 5) 00:08:17.807 23189.662 - 23290.486: 97.4444% ( 8) 00:08:17.807 23290.486 - 23391.311: 97.5636% ( 9) 00:08:17.807 23391.311 - 23492.135: 97.6562% ( 7) 00:08:17.807 23492.135 - 23592.960: 97.7489% ( 7) 00:08:17.807 23592.960 - 23693.785: 97.8019% ( 4) 00:08:17.807 23693.785 - 23794.609: 97.8681% ( 5) 00:08:17.807 23794.609 - 23895.434: 97.9211% ( 4) 00:08:17.807 23895.434 - 23996.258: 97.9873% ( 5) 00:08:17.807 23996.258 - 24097.083: 98.0403% ( 4) 00:08:17.807 24097.083 - 24197.908: 98.0932% ( 4) 00:08:17.807 24197.908 - 24298.732: 98.1329% ( 3) 00:08:17.807 24298.732 - 24399.557: 98.1859% ( 4) 00:08:17.807 24399.557 - 24500.382: 98.2256% ( 3) 00:08:17.807 24500.382 - 24601.206: 98.2654% ( 3) 00:08:17.807 24601.206 - 24702.031: 98.3051% ( 3) 00:08:17.807 28634.191 - 28835.840: 98.3183% ( 1) 00:08:17.807 28835.840 - 29037.489: 98.3845% ( 5) 00:08:17.807 29037.489 - 29239.138: 98.4507% ( 5) 00:08:17.807 29239.138 - 29440.788: 98.5169% ( 5) 00:08:17.807 29440.788 - 29642.437: 98.6096% ( 7) 00:08:17.807 29642.437 - 29844.086: 98.7288% ( 9) 00:08:17.807 29844.086 - 30045.735: 98.8612% ( 10) 00:08:17.807 30045.735 - 30247.385: 98.9936% ( 10) 00:08:17.807 30247.385 - 30449.034: 99.1128% ( 9) 00:08:17.807 30449.034 - 30650.683: 99.1525% ( 3) 00:08:17.807 39724.898 - 39926.548: 99.1790% ( 2) 00:08:17.807 39926.548 - 40128.197: 99.2585% ( 6) 00:08:17.807 40128.197 - 40329.846: 99.3512% ( 7) 00:08:17.807 40329.846 - 40531.495: 99.4571% ( 8) 00:08:17.807 40531.495 - 40733.145: 99.5630% ( 8) 00:08:17.807 40733.145 - 40934.794: 99.6822% ( 9) 00:08:17.807 40934.794 - 41136.443: 99.7881% ( 8) 00:08:17.807 41136.443 - 41338.092: 99.8941% ( 8) 00:08:17.807 41338.092 - 41539.742: 100.0000% ( 8) 00:08:17.807 00:08:17.807 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:17.807 ============================================================================== 00:08:17.807 Range in us Cumulative IO count 00:08:17.807 6276.332 - 6301.538: 0.0132% ( 1) 00:08:17.807 6301.538 - 6326.745: 0.0397% ( 2) 00:08:17.807 6326.745 - 6351.951: 0.0662% ( 2) 00:08:17.807 6351.951 - 6377.157: 0.0794% ( 1) 00:08:17.807 6377.157 - 6402.363: 0.1059% ( 2) 00:08:17.807 6402.363 - 6427.569: 0.1324% ( 2) 00:08:17.807 6427.569 - 6452.775: 0.1457% ( 1) 00:08:17.807 6452.775 - 6503.188: 
0.1986% ( 4) 00:08:17.808 6503.188 - 6553.600: 0.2913% ( 7) 00:08:17.808 6553.600 - 6604.012: 0.3310% ( 3) 00:08:17.808 6604.012 - 6654.425: 0.3840% ( 4) 00:08:17.808 6654.425 - 6704.837: 0.4237% ( 3) 00:08:17.808 6704.837 - 6755.249: 0.4767% ( 4) 00:08:17.808 6755.249 - 6805.662: 0.5164% ( 3) 00:08:17.808 6805.662 - 6856.074: 0.5561% ( 3) 00:08:17.808 6856.074 - 6906.486: 0.6091% ( 4) 00:08:17.808 6906.486 - 6956.898: 0.6488% ( 3) 00:08:17.808 6956.898 - 7007.311: 0.6886% ( 3) 00:08:17.808 7007.311 - 7057.723: 0.7283% ( 3) 00:08:17.808 7057.723 - 7108.135: 0.7680% ( 3) 00:08:17.808 7108.135 - 7158.548: 0.8077% ( 3) 00:08:17.808 7158.548 - 7208.960: 0.8475% ( 3) 00:08:17.808 13208.025 - 13308.849: 0.8739% ( 2) 00:08:17.808 13308.849 - 13409.674: 1.1123% ( 18) 00:08:17.808 13409.674 - 13510.498: 1.4433% ( 25) 00:08:17.808 13510.498 - 13611.323: 2.0127% ( 43) 00:08:17.808 13611.323 - 13712.148: 2.5026% ( 37) 00:08:17.808 13712.148 - 13812.972: 2.9661% ( 35) 00:08:17.808 13812.972 - 13913.797: 3.4296% ( 35) 00:08:17.808 13913.797 - 14014.622: 3.9460% ( 39) 00:08:17.808 14014.622 - 14115.446: 4.7140% ( 58) 00:08:17.808 14115.446 - 14216.271: 5.7601% ( 79) 00:08:17.808 14216.271 - 14317.095: 6.7929% ( 78) 00:08:17.808 14317.095 - 14417.920: 7.9184% ( 85) 00:08:17.808 14417.920 - 14518.745: 9.0440% ( 85) 00:08:17.808 14518.745 - 14619.569: 10.2357% ( 90) 00:08:17.808 14619.569 - 14720.394: 11.4142% ( 89) 00:08:17.808 14720.394 - 14821.218: 12.9767% ( 118) 00:08:17.808 14821.218 - 14922.043: 14.7113% ( 131) 00:08:17.808 14922.043 - 15022.868: 16.5916% ( 142) 00:08:17.808 15022.868 - 15123.692: 18.3395% ( 132) 00:08:17.808 15123.692 - 15224.517: 20.2860% ( 147) 00:08:17.808 15224.517 - 15325.342: 22.0471% ( 133) 00:08:17.808 15325.342 - 15426.166: 24.0069% ( 148) 00:08:17.808 15426.166 - 15526.991: 25.9004% ( 143) 00:08:17.808 15526.991 - 15627.815: 27.8999% ( 151) 00:08:17.808 15627.815 - 15728.640: 30.1377% ( 169) 00:08:17.808 15728.640 - 15829.465: 32.5212% ( 180) 00:08:17.808 15829.465 - 15930.289: 34.7060% ( 165) 00:08:17.808 15930.289 - 16031.114: 37.1822% ( 187) 00:08:17.808 16031.114 - 16131.938: 39.7511% ( 194) 00:08:17.808 16131.938 - 16232.763: 42.3067% ( 193) 00:08:17.808 16232.763 - 16333.588: 44.7828% ( 187) 00:08:17.808 16333.588 - 16434.412: 47.5238% ( 207) 00:08:17.808 16434.412 - 16535.237: 50.1986% ( 202) 00:08:17.808 16535.237 - 16636.062: 52.7410% ( 192) 00:08:17.808 16636.062 - 16736.886: 55.3231% ( 195) 00:08:17.808 16736.886 - 16837.711: 58.0376% ( 205) 00:08:17.808 16837.711 - 16938.535: 60.6992% ( 201) 00:08:17.808 16938.535 - 17039.360: 62.9237% ( 168) 00:08:17.808 17039.360 - 17140.185: 64.9364% ( 152) 00:08:17.808 17140.185 - 17241.009: 66.9359% ( 151) 00:08:17.808 17241.009 - 17341.834: 68.6176% ( 127) 00:08:17.808 17341.834 - 17442.658: 70.0344% ( 107) 00:08:17.808 17442.658 - 17543.483: 71.4645% ( 108) 00:08:17.808 17543.483 - 17644.308: 72.7357% ( 96) 00:08:17.808 17644.308 - 17745.132: 73.9539% ( 92) 00:08:17.808 17745.132 - 17845.957: 75.2913% ( 101) 00:08:17.808 17845.957 - 17946.782: 76.4036% ( 84) 00:08:17.808 17946.782 - 18047.606: 77.3835% ( 74) 00:08:17.808 18047.606 - 18148.431: 78.2971% ( 69) 00:08:17.808 18148.431 - 18249.255: 79.3565% ( 80) 00:08:17.808 18249.255 - 18350.080: 80.5085% ( 87) 00:08:17.808 18350.080 - 18450.905: 81.5678% ( 80) 00:08:17.808 18450.905 - 18551.729: 82.6536% ( 82) 00:08:17.808 18551.729 - 18652.554: 83.7791% ( 85) 00:08:17.808 18652.554 - 18753.378: 84.7325% ( 72) 00:08:17.808 18753.378 - 18854.203: 85.6594% ( 70) 00:08:17.808 
18854.203 - 18955.028: 86.6658% ( 76) 00:08:17.808 18955.028 - 19055.852: 87.7119% ( 79) 00:08:17.808 19055.852 - 19156.677: 88.6520% ( 71) 00:08:17.808 19156.677 - 19257.502: 89.5657% ( 69) 00:08:17.808 19257.502 - 19358.326: 90.5058% ( 71) 00:08:17.808 19358.326 - 19459.151: 91.2474% ( 56) 00:08:17.808 19459.151 - 19559.975: 91.8829% ( 48) 00:08:17.808 19559.975 - 19660.800: 92.4126% ( 40) 00:08:17.808 19660.800 - 19761.625: 92.9820% ( 43) 00:08:17.808 19761.625 - 19862.449: 93.5249% ( 41) 00:08:17.808 19862.449 - 19963.274: 94.0546% ( 40) 00:08:17.808 19963.274 - 20064.098: 94.4915% ( 33) 00:08:17.808 20064.098 - 20164.923: 94.9153% ( 32) 00:08:17.808 20164.923 - 20265.748: 95.3390% ( 32) 00:08:17.808 20265.748 - 20366.572: 95.7892% ( 34) 00:08:17.808 20366.572 - 20467.397: 96.1335% ( 26) 00:08:17.808 20467.397 - 20568.222: 96.3586% ( 17) 00:08:17.808 20568.222 - 20669.046: 96.6102% ( 19) 00:08:17.808 20669.046 - 20769.871: 96.8088% ( 15) 00:08:17.808 20769.871 - 20870.695: 96.9544% ( 11) 00:08:17.808 20870.695 - 20971.520: 97.0736% ( 9) 00:08:17.808 20971.520 - 21072.345: 97.1796% ( 8) 00:08:17.808 21072.345 - 21173.169: 97.2590% ( 6) 00:08:17.808 21173.169 - 21273.994: 97.3120% ( 4) 00:08:17.808 21273.994 - 21374.818: 97.3782% ( 5) 00:08:17.808 21374.818 - 21475.643: 97.4444% ( 5) 00:08:17.808 21475.643 - 21576.468: 97.4576% ( 1) 00:08:17.808 21979.766 - 22080.591: 97.5106% ( 4) 00:08:17.808 22080.591 - 22181.415: 97.5900% ( 6) 00:08:17.808 22181.415 - 22282.240: 97.6562% ( 5) 00:08:17.808 22282.240 - 22383.065: 97.7225% ( 5) 00:08:17.808 22383.065 - 22483.889: 97.7887% ( 5) 00:08:17.808 22483.889 - 22584.714: 97.8549% ( 5) 00:08:17.808 22584.714 - 22685.538: 97.9211% ( 5) 00:08:17.808 22685.538 - 22786.363: 97.9740% ( 4) 00:08:17.808 22786.363 - 22887.188: 98.0270% ( 4) 00:08:17.808 22887.188 - 22988.012: 98.0932% ( 5) 00:08:17.808 22988.012 - 23088.837: 98.1462% ( 4) 00:08:17.808 23088.837 - 23189.662: 98.1992% ( 4) 00:08:17.808 23189.662 - 23290.486: 98.2521% ( 4) 00:08:17.808 23290.486 - 23391.311: 98.3051% ( 4) 00:08:17.808 28432.542 - 28634.191: 98.4110% ( 8) 00:08:17.808 28634.191 - 28835.840: 98.5434% ( 10) 00:08:17.808 28835.840 - 29037.489: 98.6626% ( 9) 00:08:17.808 29037.489 - 29239.138: 98.7950% ( 10) 00:08:17.808 29239.138 - 29440.788: 98.9142% ( 9) 00:08:17.808 29440.788 - 29642.437: 99.0334% ( 9) 00:08:17.808 29642.437 - 29844.086: 99.1525% ( 9) 00:08:17.808 39523.249 - 39724.898: 99.2452% ( 7) 00:08:17.808 39724.898 - 39926.548: 99.3776% ( 10) 00:08:17.808 39926.548 - 40128.197: 99.4968% ( 9) 00:08:17.808 40128.197 - 40329.846: 99.6160% ( 9) 00:08:17.808 40329.846 - 40531.495: 99.7352% ( 9) 00:08:17.808 40531.495 - 40733.145: 99.8543% ( 9) 00:08:17.808 40733.145 - 40934.794: 99.9735% ( 9) 00:08:17.808 40934.794 - 41136.443: 100.0000% ( 2) 00:08:17.808 00:08:17.808 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:17.808 ============================================================================== 00:08:17.808 Range in us Cumulative IO count 00:08:17.808 5419.323 - 5444.529: 0.0397% ( 3) 00:08:17.808 5444.529 - 5469.735: 0.0662% ( 2) 00:08:17.809 5469.735 - 5494.942: 0.0794% ( 1) 00:08:17.809 5494.942 - 5520.148: 0.1059% ( 2) 00:08:17.809 5520.148 - 5545.354: 0.1192% ( 1) 00:08:17.809 5545.354 - 5570.560: 0.1457% ( 2) 00:08:17.809 5570.560 - 5595.766: 0.1589% ( 1) 00:08:17.809 5595.766 - 5620.972: 0.1854% ( 2) 00:08:17.809 5620.972 - 5646.178: 0.2119% ( 2) 00:08:17.809 5646.178 - 5671.385: 0.2251% ( 1) 00:08:17.809 5671.385 - 5696.591: 0.2516% ( 2) 
00:08:17.809 5696.591 - 5721.797: 0.2781% ( 2) 00:08:17.809 5721.797 - 5747.003: 0.2913% ( 1) 00:08:17.809 5747.003 - 5772.209: 0.3178% ( 2) 00:08:17.809 5772.209 - 5797.415: 0.3443% ( 2) 00:08:17.809 5797.415 - 5822.622: 0.3708% ( 2) 00:08:17.809 5822.622 - 5847.828: 0.3840% ( 1) 00:08:17.809 5847.828 - 5873.034: 0.4105% ( 2) 00:08:17.809 5873.034 - 5898.240: 0.4370% ( 2) 00:08:17.809 5898.240 - 5923.446: 0.4502% ( 1) 00:08:17.809 5923.446 - 5948.652: 0.4767% ( 2) 00:08:17.809 5948.652 - 5973.858: 0.5032% ( 2) 00:08:17.809 5973.858 - 5999.065: 0.5164% ( 1) 00:08:17.809 5999.065 - 6024.271: 0.5429% ( 2) 00:08:17.809 6024.271 - 6049.477: 0.5694% ( 2) 00:08:17.809 6049.477 - 6074.683: 0.5826% ( 1) 00:08:17.809 6074.683 - 6099.889: 0.6091% ( 2) 00:08:17.809 6099.889 - 6125.095: 0.6356% ( 2) 00:08:17.809 6125.095 - 6150.302: 0.6488% ( 1) 00:08:17.809 6150.302 - 6175.508: 0.6753% ( 2) 00:08:17.809 6175.508 - 6200.714: 0.7018% ( 2) 00:08:17.809 6200.714 - 6225.920: 0.7150% ( 1) 00:08:17.809 6225.920 - 6251.126: 0.7415% ( 2) 00:08:17.809 6251.126 - 6276.332: 0.7680% ( 2) 00:08:17.809 6276.332 - 6301.538: 0.7812% ( 1) 00:08:17.809 6301.538 - 6326.745: 0.8077% ( 2) 00:08:17.809 6326.745 - 6351.951: 0.8342% ( 2) 00:08:17.809 6351.951 - 6377.157: 0.8475% ( 1) 00:08:17.809 12300.603 - 12351.015: 0.8607% ( 1) 00:08:17.809 12351.015 - 12401.428: 0.9004% ( 3) 00:08:17.809 12401.428 - 12451.840: 0.9534% ( 4) 00:08:17.809 12451.840 - 12502.252: 1.0064% ( 4) 00:08:17.809 12502.252 - 12552.665: 1.0328% ( 2) 00:08:17.809 12552.665 - 12603.077: 1.0858% ( 4) 00:08:17.809 12603.077 - 12653.489: 1.1255% ( 3) 00:08:17.809 12653.489 - 12703.902: 1.1653% ( 3) 00:08:17.809 12703.902 - 12754.314: 1.2182% ( 4) 00:08:17.809 12754.314 - 12804.726: 1.2712% ( 4) 00:08:17.809 12804.726 - 12855.138: 1.3109% ( 3) 00:08:17.809 12855.138 - 12905.551: 1.3639% ( 4) 00:08:17.809 12905.551 - 13006.375: 1.4566% ( 7) 00:08:17.809 13006.375 - 13107.200: 1.5493% ( 7) 00:08:17.809 13107.200 - 13208.025: 1.6419% ( 7) 00:08:17.809 13208.025 - 13308.849: 1.7346% ( 7) 00:08:17.809 13308.849 - 13409.674: 1.8008% ( 5) 00:08:17.809 13409.674 - 13510.498: 1.8671% ( 5) 00:08:17.809 13510.498 - 13611.323: 1.9597% ( 7) 00:08:17.809 13611.323 - 13712.148: 2.1716% ( 16) 00:08:17.809 13712.148 - 13812.972: 2.3835% ( 16) 00:08:17.809 13812.972 - 13913.797: 2.8602% ( 36) 00:08:17.809 13913.797 - 14014.622: 3.3633% ( 38) 00:08:17.809 14014.622 - 14115.446: 4.0122% ( 49) 00:08:17.809 14115.446 - 14216.271: 5.0318% ( 77) 00:08:17.809 14216.271 - 14317.095: 6.0249% ( 75) 00:08:17.809 14317.095 - 14417.920: 7.2431% ( 92) 00:08:17.809 14417.920 - 14518.745: 8.5143% ( 96) 00:08:17.809 14518.745 - 14619.569: 9.7987% ( 97) 00:08:17.809 14619.569 - 14720.394: 11.2950% ( 113) 00:08:17.809 14720.394 - 14821.218: 13.0959% ( 136) 00:08:17.809 14821.218 - 14922.043: 15.1880% ( 158) 00:08:17.809 14922.043 - 15022.868: 17.3861% ( 166) 00:08:17.809 15022.868 - 15123.692: 19.4518% ( 156) 00:08:17.809 15123.692 - 15224.517: 21.7426% ( 173) 00:08:17.809 15224.517 - 15325.342: 23.9407% ( 166) 00:08:17.809 15325.342 - 15426.166: 25.8607% ( 145) 00:08:17.809 15426.166 - 15526.991: 27.9793% ( 160) 00:08:17.809 15526.991 - 15627.815: 29.9656% ( 150) 00:08:17.809 15627.815 - 15728.640: 31.9783% ( 152) 00:08:17.809 15728.640 - 15829.465: 34.0969% ( 160) 00:08:17.809 15829.465 - 15930.289: 36.5466% ( 185) 00:08:17.809 15930.289 - 16031.114: 38.5196% ( 149) 00:08:17.809 16031.114 - 16131.938: 40.8633% ( 177) 00:08:17.809 16131.938 - 16232.763: 43.3263% ( 186) 00:08:17.809 16232.763 
- 16333.588: 45.8422% ( 190) 00:08:17.809 16333.588 - 16434.412: 48.2389% ( 181) 00:08:17.809 16434.412 - 16535.237: 50.3972% ( 163) 00:08:17.809 16535.237 - 16636.062: 52.7278% ( 176) 00:08:17.809 16636.062 - 16736.886: 54.8464% ( 160) 00:08:17.809 16736.886 - 16837.711: 56.8856% ( 154) 00:08:17.809 16837.711 - 16938.535: 59.2293% ( 177) 00:08:17.809 16938.535 - 17039.360: 61.3215% ( 158) 00:08:17.809 17039.360 - 17140.185: 63.4004% ( 157) 00:08:17.809 17140.185 - 17241.009: 65.3999% ( 151) 00:08:17.809 17241.009 - 17341.834: 67.2272% ( 138) 00:08:17.809 17341.834 - 17442.658: 69.1472% ( 145) 00:08:17.809 17442.658 - 17543.483: 70.8157% ( 126) 00:08:17.809 17543.483 - 17644.308: 72.0604% ( 94) 00:08:17.809 17644.308 - 17745.132: 73.3845% ( 100) 00:08:17.809 17745.132 - 17845.957: 75.0000% ( 122) 00:08:17.809 17845.957 - 17946.782: 76.4566% ( 110) 00:08:17.809 17946.782 - 18047.606: 77.9396% ( 112) 00:08:17.809 18047.606 - 18148.431: 79.3432% ( 106) 00:08:17.809 18148.431 - 18249.255: 80.6939% ( 102) 00:08:17.809 18249.255 - 18350.080: 81.8724% ( 89) 00:08:17.809 18350.080 - 18450.905: 82.9846% ( 84) 00:08:17.809 18450.905 - 18551.729: 84.0175% ( 78) 00:08:17.809 18551.729 - 18652.554: 84.9576% ( 71) 00:08:17.809 18652.554 - 18753.378: 86.0434% ( 82) 00:08:17.809 18753.378 - 18854.203: 86.9174% ( 66) 00:08:17.809 18854.203 - 18955.028: 87.7648% ( 64) 00:08:17.809 18955.028 - 19055.852: 88.4799% ( 54) 00:08:17.809 19055.852 - 19156.677: 89.2082% ( 55) 00:08:17.809 19156.677 - 19257.502: 89.9762% ( 58) 00:08:17.809 19257.502 - 19358.326: 90.7044% ( 55) 00:08:17.809 19358.326 - 19459.151: 91.3665% ( 50) 00:08:17.809 19459.151 - 19559.975: 92.0551% ( 52) 00:08:17.809 19559.975 - 19660.800: 92.5847% ( 40) 00:08:17.809 19660.800 - 19761.625: 92.9820% ( 30) 00:08:17.809 19761.625 - 19862.449: 93.2998% ( 24) 00:08:17.809 19862.449 - 19963.274: 93.5381% ( 18) 00:08:17.809 19963.274 - 20064.098: 93.7765% ( 18) 00:08:17.809 20064.098 - 20164.923: 94.0943% ( 24) 00:08:17.809 20164.923 - 20265.748: 94.3856% ( 22) 00:08:17.809 20265.748 - 20366.572: 94.6901% ( 23) 00:08:17.809 20366.572 - 20467.397: 95.1404% ( 34) 00:08:17.809 20467.397 - 20568.222: 95.5641% ( 32) 00:08:17.809 20568.222 - 20669.046: 95.8289% ( 20) 00:08:17.809 20669.046 - 20769.871: 96.0938% ( 20) 00:08:17.809 20769.871 - 20870.695: 96.3453% ( 19) 00:08:17.809 20870.695 - 20971.520: 96.5307% ( 14) 00:08:17.809 20971.520 - 21072.345: 96.7161% ( 14) 00:08:17.809 21072.345 - 21173.169: 96.8882% ( 13) 00:08:17.809 21173.169 - 21273.994: 97.0869% ( 15) 00:08:17.809 21273.994 - 21374.818: 97.2458% ( 12) 00:08:17.809 21374.818 - 21475.643: 97.3252% ( 6) 00:08:17.809 21475.643 - 21576.468: 97.3914% ( 5) 00:08:17.809 21576.468 - 21677.292: 97.4576% ( 5) 00:08:17.809 22181.415 - 22282.240: 97.4974% ( 3) 00:08:17.810 22282.240 - 22383.065: 97.5636% ( 5) 00:08:17.810 22383.065 - 22483.889: 97.6298% ( 5) 00:08:17.810 22483.889 - 22584.714: 97.6960% ( 5) 00:08:17.810 22584.714 - 22685.538: 97.7754% ( 6) 00:08:17.810 22685.538 - 22786.363: 97.8549% ( 6) 00:08:17.810 22786.363 - 22887.188: 97.9078% ( 4) 00:08:17.810 22887.188 - 22988.012: 97.9608% ( 4) 00:08:17.810 22988.012 - 23088.837: 98.0270% ( 5) 00:08:17.810 23088.837 - 23189.662: 98.0667% ( 3) 00:08:17.810 23189.662 - 23290.486: 98.1329% ( 5) 00:08:17.810 23290.486 - 23391.311: 98.1992% ( 5) 00:08:17.810 23391.311 - 23492.135: 98.2521% ( 4) 00:08:17.810 23492.135 - 23592.960: 98.3051% ( 4) 00:08:17.810 28029.243 - 28230.892: 98.3316% ( 2) 00:08:17.810 28230.892 - 28432.542: 98.4507% ( 9) 
00:08:17.810 28432.542 - 28634.191: 98.5699% ( 9) 00:08:17.810 28634.191 - 28835.840: 98.6891% ( 9) 00:08:17.810 28835.840 - 29037.489: 98.7950% ( 8) 00:08:17.810 29037.489 - 29239.138: 98.8745% ( 6) 00:08:17.810 29239.138 - 29440.788: 99.0069% ( 10) 00:08:17.810 29440.788 - 29642.437: 99.1393% ( 10) 00:08:17.810 29642.437 - 29844.086: 99.1525% ( 1) 00:08:17.810 39119.951 - 39321.600: 99.2320% ( 6) 00:08:17.810 39321.600 - 39523.249: 99.3512% ( 9) 00:08:17.810 39523.249 - 39724.898: 99.4571% ( 8) 00:08:17.810 39724.898 - 39926.548: 99.5101% ( 4) 00:08:17.810 39926.548 - 40128.197: 99.6028% ( 7) 00:08:17.810 40128.197 - 40329.846: 99.7087% ( 8) 00:08:17.810 40329.846 - 40531.495: 99.8146% ( 8) 00:08:17.810 40531.495 - 40733.145: 99.9206% ( 8) 00:08:17.810 40733.145 - 40934.794: 100.0000% ( 6) 00:08:17.810 00:08:17.810 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:17.810 ============================================================================== 00:08:17.810 Range in us Cumulative IO count 00:08:17.810 4738.757 - 4763.963: 0.0131% ( 1) 00:08:17.810 4763.963 - 4789.169: 0.0394% ( 2) 00:08:17.810 4789.169 - 4814.375: 0.0657% ( 2) 00:08:17.810 4814.375 - 4839.582: 0.0788% ( 1) 00:08:17.810 4839.582 - 4864.788: 0.1050% ( 2) 00:08:17.810 4864.788 - 4889.994: 0.1182% ( 1) 00:08:17.810 4889.994 - 4915.200: 0.1444% ( 2) 00:08:17.810 4915.200 - 4940.406: 0.1838% ( 3) 00:08:17.810 4940.406 - 4965.612: 0.2232% ( 3) 00:08:17.810 4965.612 - 4990.818: 0.2626% ( 3) 00:08:17.810 4990.818 - 5016.025: 0.3151% ( 4) 00:08:17.810 5016.025 - 5041.231: 0.3545% ( 3) 00:08:17.810 5041.231 - 5066.437: 0.3808% ( 2) 00:08:17.810 5066.437 - 5091.643: 0.3939% ( 1) 00:08:17.810 5091.643 - 5116.849: 0.4202% ( 2) 00:08:17.810 5116.849 - 5142.055: 0.4464% ( 2) 00:08:17.810 5142.055 - 5167.262: 0.4596% ( 1) 00:08:17.810 5167.262 - 5192.468: 0.4858% ( 2) 00:08:17.810 5192.468 - 5217.674: 0.5121% ( 2) 00:08:17.810 5217.674 - 5242.880: 0.5252% ( 1) 00:08:17.810 5242.880 - 5268.086: 0.5515% ( 2) 00:08:17.810 5268.086 - 5293.292: 0.5646% ( 1) 00:08:17.810 5293.292 - 5318.498: 0.5909% ( 2) 00:08:17.810 5318.498 - 5343.705: 0.6171% ( 2) 00:08:17.810 5343.705 - 5368.911: 0.6303% ( 1) 00:08:17.810 5368.911 - 5394.117: 0.6565% ( 2) 00:08:17.810 5394.117 - 5419.323: 0.6828% ( 2) 00:08:17.810 5419.323 - 5444.529: 0.6959% ( 1) 00:08:17.810 5444.529 - 5469.735: 0.7222% ( 2) 00:08:17.810 5469.735 - 5494.942: 0.7484% ( 2) 00:08:17.810 5494.942 - 5520.148: 0.7616% ( 1) 00:08:17.810 5520.148 - 5545.354: 0.7878% ( 2) 00:08:17.810 5545.354 - 5570.560: 0.8141% ( 2) 00:08:17.810 5570.560 - 5595.766: 0.8272% ( 1) 00:08:17.810 5595.766 - 5620.972: 0.8403% ( 1) 00:08:17.810 11141.120 - 11191.532: 0.8535% ( 1) 00:08:17.810 11191.532 - 11241.945: 0.8929% ( 3) 00:08:17.810 11241.945 - 11292.357: 0.9454% ( 4) 00:08:17.810 11292.357 - 11342.769: 0.9848% ( 3) 00:08:17.810 11342.769 - 11393.182: 1.0504% ( 5) 00:08:17.810 11393.182 - 11443.594: 1.1029% ( 4) 00:08:17.810 11443.594 - 11494.006: 1.1423% ( 3) 00:08:17.810 11494.006 - 11544.418: 1.1949% ( 4) 00:08:17.810 11544.418 - 11594.831: 1.2474% ( 4) 00:08:17.810 11594.831 - 11645.243: 1.2999% ( 4) 00:08:17.810 11645.243 - 11695.655: 1.3393% ( 3) 00:08:17.810 11695.655 - 11746.068: 1.3787% ( 3) 00:08:17.810 11746.068 - 11796.480: 1.4312% ( 4) 00:08:17.810 11796.480 - 11846.892: 1.4837% ( 4) 00:08:17.810 11846.892 - 11897.305: 1.5362% ( 4) 00:08:17.810 11897.305 - 11947.717: 1.5756% ( 3) 00:08:17.810 11947.717 - 11998.129: 1.6150% ( 3) 00:08:17.810 11998.129 - 12048.542: 1.6413% ( 2) 
00:08:17.810 12048.542 - 12098.954: 1.6807% ( 3) 00:08:17.810 13409.674 - 13510.498: 1.7069% ( 2) 00:08:17.810 13510.498 - 13611.323: 1.7726% ( 5) 00:08:17.810 13611.323 - 13712.148: 1.9039% ( 10) 00:08:17.810 13712.148 - 13812.972: 2.0746% ( 13) 00:08:17.810 13812.972 - 13913.797: 2.3503% ( 21) 00:08:17.810 13913.797 - 14014.622: 2.9674% ( 47) 00:08:17.810 14014.622 - 14115.446: 3.9259% ( 73) 00:08:17.810 14115.446 - 14216.271: 4.7006% ( 59) 00:08:17.810 14216.271 - 14317.095: 5.8955% ( 91) 00:08:17.810 14317.095 - 14417.920: 7.1691% ( 97) 00:08:17.810 14417.920 - 14518.745: 8.4953% ( 101) 00:08:17.810 14518.745 - 14619.569: 9.9790% ( 113) 00:08:17.810 14619.569 - 14720.394: 11.7122% ( 132) 00:08:17.810 14720.394 - 14821.218: 13.6161% ( 145) 00:08:17.810 14821.218 - 14922.043: 15.4543% ( 140) 00:08:17.810 14922.043 - 15022.868: 17.5814% ( 162) 00:08:17.810 15022.868 - 15123.692: 19.7085% ( 162) 00:08:17.810 15123.692 - 15224.517: 22.0457% ( 178) 00:08:17.810 15224.517 - 15325.342: 24.1991% ( 164) 00:08:17.810 15325.342 - 15426.166: 26.6019% ( 183) 00:08:17.810 15426.166 - 15526.991: 28.9653% ( 180) 00:08:17.810 15526.991 - 15627.815: 31.0399% ( 158) 00:08:17.810 15627.815 - 15728.640: 33.1145% ( 158) 00:08:17.810 15728.640 - 15829.465: 35.3598% ( 171) 00:08:17.810 15829.465 - 15930.289: 37.4475% ( 159) 00:08:17.810 15930.289 - 16031.114: 39.5877% ( 163) 00:08:17.810 16031.114 - 16131.938: 41.6492% ( 157) 00:08:17.810 16131.938 - 16232.763: 44.0914% ( 186) 00:08:17.811 16232.763 - 16333.588: 46.3629% ( 173) 00:08:17.811 16333.588 - 16434.412: 48.6738% ( 176) 00:08:17.811 16434.412 - 16535.237: 50.7484% ( 158) 00:08:17.811 16535.237 - 16636.062: 52.8887% ( 163) 00:08:17.811 16636.062 - 16736.886: 54.8188% ( 147) 00:08:17.811 16736.886 - 16837.711: 56.7096% ( 144) 00:08:17.811 16837.711 - 16938.535: 58.7841% ( 158) 00:08:17.811 16938.535 - 17039.360: 60.8718% ( 159) 00:08:17.811 17039.360 - 17140.185: 63.0121% ( 163) 00:08:17.811 17140.185 - 17241.009: 65.1654% ( 164) 00:08:17.811 17241.009 - 17341.834: 67.0693% ( 145) 00:08:17.811 17341.834 - 17442.658: 68.9338% ( 142) 00:08:17.811 17442.658 - 17543.483: 70.5882% ( 126) 00:08:17.811 17543.483 - 17644.308: 72.1901% ( 122) 00:08:17.811 17644.308 - 17745.132: 73.5163% ( 101) 00:08:17.811 17745.132 - 17845.957: 74.9081% ( 106) 00:08:17.811 17845.957 - 17946.782: 76.7201% ( 138) 00:08:17.811 17946.782 - 18047.606: 78.0331% ( 100) 00:08:17.811 18047.606 - 18148.431: 79.2673% ( 94) 00:08:17.811 18148.431 - 18249.255: 80.3309% ( 81) 00:08:17.811 18249.255 - 18350.080: 81.3550% ( 78) 00:08:17.811 18350.080 - 18450.905: 82.4974% ( 87) 00:08:17.811 18450.905 - 18551.729: 83.5478% ( 80) 00:08:17.811 18551.729 - 18652.554: 84.5063% ( 73) 00:08:17.811 18652.554 - 18753.378: 85.4911% ( 75) 00:08:17.811 18753.378 - 18854.203: 86.3839% ( 68) 00:08:17.811 18854.203 - 18955.028: 87.2505% ( 66) 00:08:17.811 18955.028 - 19055.852: 87.9727% ( 55) 00:08:17.811 19055.852 - 19156.677: 88.8524% ( 67) 00:08:17.811 19156.677 - 19257.502: 89.6402% ( 60) 00:08:17.811 19257.502 - 19358.326: 90.2967% ( 50) 00:08:17.811 19358.326 - 19459.151: 90.9007% ( 46) 00:08:17.811 19459.151 - 19559.975: 91.5572% ( 50) 00:08:17.811 19559.975 - 19660.800: 92.2794% ( 55) 00:08:17.811 19660.800 - 19761.625: 92.8309% ( 42) 00:08:17.811 19761.625 - 19862.449: 93.3561% ( 40) 00:08:17.811 19862.449 - 19963.274: 93.8550% ( 38) 00:08:17.811 19963.274 - 20064.098: 94.3409% ( 37) 00:08:17.811 20064.098 - 20164.923: 94.7348% ( 30) 00:08:17.811 20164.923 - 20265.748: 95.1155% ( 29) 00:08:17.811 
00:08:17.811  20265.748 - 20366.572: 95.2600% ( 11)
00:08:17.811  20366.572 - 20467.397: 95.4569% ( 15)
00:08:17.811  20467.397 - 20568.222: 95.5882% ( 10)
00:08:17.811  20568.222 - 20669.046: 95.6933% ( 8)
00:08:17.811  20669.046 - 20769.871: 95.8246% ( 10)
00:08:17.811  20769.871 - 20870.695: 95.9821% ( 12)
00:08:17.811  20870.695 - 20971.520: 96.2447% ( 20)
00:08:17.811  20971.520 - 21072.345: 96.4286% ( 14)
00:08:17.811  21072.345 - 21173.169: 96.6255% ( 15)
00:08:17.811  21173.169 - 21273.994: 96.7568% ( 10)
00:08:17.811  21273.994 - 21374.818: 96.8881% ( 10)
00:08:17.811  21374.818 - 21475.643: 97.0326% ( 11)
00:08:17.811  21475.643 - 21576.468: 97.2558% ( 17)
00:08:17.811  21576.468 - 21677.292: 97.4396% ( 14)
00:08:17.811  21677.292 - 21778.117: 97.6497% ( 16)
00:08:17.811  21778.117 - 21878.942: 97.8335% ( 14)
00:08:17.811  21878.942 - 21979.766: 97.9517% ( 9)
00:08:17.811  21979.766 - 22080.591: 98.0699% ( 9)
00:08:17.811  22080.591 - 22181.415: 98.1880% ( 9)
00:08:17.811  22181.415 - 22282.240: 98.3193% ( 10)
00:08:17.811  22282.240 - 22383.065: 98.4375% ( 9)
00:08:17.811  22383.065 - 22483.889: 98.5163% ( 6)
00:08:17.811  22483.889 - 22584.714: 98.5951% ( 6)
00:08:17.811  22584.714 - 22685.538: 98.6607% ( 5)
00:08:17.811  22685.538 - 22786.363: 98.7132% ( 4)
00:08:17.811  22786.363 - 22887.188: 98.7789% ( 5)
00:08:17.811  22887.188 - 22988.012: 98.8445% ( 5)
00:08:17.811  22988.012 - 23088.837: 98.9102% ( 5)
00:08:17.811  23088.837 - 23189.662: 98.9758% ( 5)
00:08:17.811  23189.662 - 23290.486: 99.0415% ( 5)
00:08:17.811  23290.486 - 23391.311: 99.0940% ( 4)
00:08:17.811  23391.311 - 23492.135: 99.1465% ( 4)
00:08:17.811  23492.135 - 23592.960: 99.1597% ( 1)
00:08:17.811  28835.840 - 29037.489: 99.2516% ( 7)
00:08:17.811  29037.489 - 29239.138: 99.3697% ( 9)
00:08:17.811  29239.138 - 29440.788: 99.4748% ( 8)
00:08:17.811  29440.788 - 29642.437: 99.6061% ( 10)
00:08:17.811  29642.437 - 29844.086: 99.7111% ( 8)
00:08:17.811  29844.086 - 30045.735: 99.8293% ( 9)
00:08:17.811  30045.735 - 30247.385: 99.9606% ( 10)
00:08:17.811  30247.385 - 30449.034: 100.0000% ( 3)
00:08:17.811 
00:08:17.811 22:52:56 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:08:19.199 Initializing NVMe Controllers
00:08:19.199 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:19.199 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:19.199 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:19.199 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:19.199 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:19.199 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:19.199 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:19.199 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:19.199 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:19.199 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:19.199 Initialization complete. Launching workers.
00:08:19.199 ========================================================
00:08:19.199                                                                              Latency(us)
00:08:19.199 Device Information                     :       IOPS      MiB/s    Average        min        max
00:08:19.199 PCIE (0000:00:10.0) NSID 1 from core  0:    7948.83      93.15   16123.22   12005.58   39340.08
00:08:19.199 PCIE (0000:00:11.0) NSID 1 from core  0:    7948.83      93.15   16108.19   11396.23   38506.68
00:08:19.199 PCIE (0000:00:13.0) NSID 1 from core  0:    7948.83      93.15   16088.50    9584.21   39595.78
00:08:19.199 PCIE (0000:00:12.0) NSID 1 from core  0:    7948.83      93.15   16068.13    8605.75   39186.25
00:08:19.199 PCIE (0000:00:12.0) NSID 2 from core  0:    7948.83      93.15   16047.69    7614.16   38521.46
00:08:19.199 PCIE (0000:00:12.0) NSID 3 from core  0:    8012.42      93.90   15900.02    7118.61   29148.64
00:08:19.199 ========================================================
00:08:19.199 Total                                  :   47756.59     559.65   16055.75    7118.61   39595.78
00:08:19.199 
00:08:19.199 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:19.199 =================================================================================
00:08:19.199     1.00000% : 13107.200us
00:08:19.199    10.00000% : 14115.446us
00:08:19.199    25.00000% : 14720.394us
00:08:19.199    50.00000% : 15526.991us
00:08:19.199    75.00000% : 16938.535us
00:08:19.199    90.00000% : 18350.080us
00:08:19.199    95.00000% : 19459.151us
00:08:19.199    98.00000% : 21374.818us
00:08:19.199    99.00000% : 29844.086us
00:08:19.199    99.50000% : 38313.354us
00:08:19.199    99.90000% : 39119.951us
00:08:19.199    99.99000% : 39523.249us
00:08:19.199    99.99900% : 39523.249us
00:08:19.199    99.99990% : 39523.249us
00:08:19.199    99.99999% : 39523.249us
00:08:19.199 
00:08:19.199 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:19.199 =================================================================================
00:08:19.199     1.00000% : 12653.489us
00:08:19.199    10.00000% : 14216.271us
00:08:19.199    25.00000% : 14720.394us
00:08:19.199    50.00000% : 15426.166us
00:08:19.199    75.00000% : 17039.360us
00:08:19.199    90.00000% : 18350.080us
00:08:19.199    95.00000% : 19156.677us
00:08:19.199    98.00000% : 21677.292us
00:08:19.199    99.00000% : 29239.138us
00:08:19.199    99.50000% : 37708.406us
00:08:19.199    99.90000% : 38515.003us
00:08:19.199    99.99000% : 38515.003us
00:08:19.199    99.99900% : 38515.003us
00:08:19.199    99.99990% : 38515.003us
00:08:19.199    99.99999% : 38515.003us
00:08:19.199 
00:08:19.199 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:19.199 =================================================================================
00:08:19.199     1.00000% : 12855.138us
00:08:19.199    10.00000% : 14115.446us
00:08:19.199    25.00000% : 14821.218us
00:08:19.199    50.00000% : 15526.991us
00:08:19.199    75.00000% : 16837.711us
00:08:19.199    90.00000% : 18249.255us
00:08:19.199    95.00000% : 19459.151us
00:08:19.199    98.00000% : 22181.415us
00:08:19.199    99.00000% : 29642.437us
00:08:19.199    99.50000% : 38716.652us
00:08:19.199    99.90000% : 39523.249us
00:08:19.199    99.99000% : 39724.898us
00:08:19.199    99.99900% : 39724.898us
00:08:19.199    99.99990% : 39724.898us
00:08:19.199    99.99999% : 39724.898us
00:08:19.199 
00:08:19.199 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:19.199 =================================================================================
00:08:19.199     1.00000% : 13006.375us
00:08:19.199    10.00000% : 14317.095us
00:08:19.199    25.00000% : 14720.394us
00:08:19.199    50.00000% : 15526.991us
00:08:19.199    75.00000% : 16736.886us
00:08:19.199    90.00000% : 18249.255us
00:08:19.199    95.00000% : 19459.151us
00:08:19.199    98.00000% : 21677.292us
00:08:19.199    99.00000% : 28835.840us
00:08:19.199    99.50000% : 38313.354us
00:08:19.199    99.90000% : 39119.951us
00:08:19.199    99.99000% : 39321.600us
00:08:19.199    99.99900% : 39321.600us
00:08:19.199    99.99990% : 39321.600us
00:08:19.199    99.99999% : 39321.600us
00:08:19.199 
00:08:19.199 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:19.199 =================================================================================
00:08:19.199     1.00000% : 12905.551us
00:08:19.199    10.00000% : 14216.271us
00:08:19.199    25.00000% : 14720.394us
00:08:19.199    50.00000% : 15426.166us
00:08:19.199    75.00000% : 16938.535us
00:08:19.199    90.00000% : 18350.080us
00:08:19.199    95.00000% : 19257.502us
00:08:19.199    98.00000% : 21273.994us
00:08:19.199    99.00000% : 28432.542us
00:08:19.199    99.50000% : 37708.406us
00:08:19.199    99.90000% : 38515.003us
00:08:19.199    99.99000% : 38716.652us
00:08:19.199    99.99900% : 38716.652us
00:08:19.199    99.99990% : 38716.652us
00:08:19.199    99.99999% : 38716.652us
00:08:19.199 
00:08:19.199 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:19.199 =================================================================================
00:08:19.199     1.00000% : 12754.314us
00:08:19.199    10.00000% : 14216.271us
00:08:19.199    25.00000% : 14720.394us
00:08:19.199    50.00000% : 15526.991us
00:08:19.199    75.00000% : 17039.360us
00:08:19.199    90.00000% : 18450.905us
00:08:19.199    95.00000% : 19459.151us
00:08:19.199    98.00000% : 20265.748us
00:08:19.199    99.00000% : 20769.871us
00:08:19.199    99.50000% : 28230.892us
00:08:19.199    99.90000% : 29037.489us
00:08:19.199    99.99000% : 29239.138us
00:08:19.199    99.99900% : 29239.138us
00:08:19.199    99.99990% : 29239.138us
00:08:19.199    99.99999% : 29239.138us
00:08:19.199 
00:08:19.199 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:19.199 ==============================================================================
00:08:19.199        Range in us     Cumulative    IO count
00:08:19.199  11998.129 - 12048.542:  0.0125% ( 1)
00:08:19.199  12098.954 - 12149.366:  0.0375% ( 2)
00:08:19.199  12250.191 - 12300.603:  0.1875% ( 12)
00:08:19.199  12300.603 - 12351.015:  0.3500% ( 13)
00:08:19.199  12351.015 - 12401.428:  0.4625% ( 9)
00:08:19.199  12401.428 - 12451.840:  0.4875% ( 2)
00:08:19.199  12502.252 - 12552.665:  0.5000% ( 1)
00:08:19.199  12552.665 - 12603.077:  0.5375% ( 3)
00:08:19.199  12603.077 - 12653.489:  0.5500% ( 1)
00:08:19.199  12653.489 - 12703.902:  0.5750% ( 2)
00:08:19.199  12703.902 - 12754.314:  0.6000% ( 2)
00:08:19.199  12754.314 - 12804.726:  0.6500% ( 4)
00:08:19.199  12804.726 - 12855.138:  0.7000% ( 4)
00:08:19.199  12855.138 - 12905.551:  0.8125% ( 9)
00:08:19.199  12905.551 - 13006.375:  0.9875% ( 14)
00:08:19.199  13006.375 - 13107.200:  1.1625% ( 14)
00:08:19.199  13107.200 - 13208.025:  1.5750% ( 33)
00:08:19.199  13208.025 - 13308.849:  1.9125% ( 27)
00:08:19.199  13308.849 - 13409.674:  2.2250% ( 25)
00:08:19.199  13409.674 - 13510.498:  2.6500% ( 34)
00:08:19.199  13510.498 - 13611.323:  3.1500% ( 40)
00:08:19.199  13611.323 - 13712.148:  4.2000% ( 84)
00:08:19.199  13712.148 - 13812.972:  5.2500% ( 84)
00:08:19.199  13812.972 - 13913.797:  7.0625% ( 145)
00:08:19.199  13913.797 - 14014.622:  8.9375% ( 150)
00:08:19.199  14014.622 - 14115.446: 11.1250% ( 175)
00:08:19.199  14115.446 - 14216.271: 14.0500% ( 234)
00:08:19.199  14216.271 - 14317.095: 16.8125% ( 221)
00:08:19.199  14317.095 - 14417.920: 19.1625% ( 188)
00:08:19.199  14417.920 - 14518.745: 21.9875% ( 226)
00:08:19.199  14518.745 - 14619.569: 24.7625% ( 222)
00:08:19.199  14619.569 - 14720.394:
27.3875% ( 210) 00:08:19.199 14720.394 - 14821.218: 29.8625% ( 198) 00:08:19.199 14821.218 - 14922.043: 33.4000% ( 283) 00:08:19.199 14922.043 - 15022.868: 36.6125% ( 257) 00:08:19.199 15022.868 - 15123.692: 39.5625% ( 236) 00:08:19.199 15123.692 - 15224.517: 42.7625% ( 256) 00:08:19.199 15224.517 - 15325.342: 45.4125% ( 212) 00:08:19.199 15325.342 - 15426.166: 48.1000% ( 215) 00:08:19.199 15426.166 - 15526.991: 50.1875% ( 167) 00:08:19.199 15526.991 - 15627.815: 52.5500% ( 189) 00:08:19.199 15627.815 - 15728.640: 54.5625% ( 161) 00:08:19.199 15728.640 - 15829.465: 56.8750% ( 185) 00:08:19.199 15829.465 - 15930.289: 59.2250% ( 188) 00:08:19.199 15930.289 - 16031.114: 61.0000% ( 142) 00:08:19.199 16031.114 - 16131.938: 62.3750% ( 110) 00:08:19.200 16131.938 - 16232.763: 63.9125% ( 123) 00:08:19.200 16232.763 - 16333.588: 65.5625% ( 132) 00:08:19.200 16333.588 - 16434.412: 67.0875% ( 122) 00:08:19.200 16434.412 - 16535.237: 68.6625% ( 126) 00:08:19.200 16535.237 - 16636.062: 69.9750% ( 105) 00:08:19.200 16636.062 - 16736.886: 71.3750% ( 112) 00:08:19.200 16736.886 - 16837.711: 73.1500% ( 142) 00:08:19.200 16837.711 - 16938.535: 75.3125% ( 173) 00:08:19.200 16938.535 - 17039.360: 76.9625% ( 132) 00:08:19.200 17039.360 - 17140.185: 78.6625% ( 136) 00:08:19.200 17140.185 - 17241.009: 80.3500% ( 135) 00:08:19.200 17241.009 - 17341.834: 81.7250% ( 110) 00:08:19.200 17341.834 - 17442.658: 83.0125% ( 103) 00:08:19.200 17442.658 - 17543.483: 84.4125% ( 112) 00:08:19.200 17543.483 - 17644.308: 85.2250% ( 65) 00:08:19.200 17644.308 - 17745.132: 86.1000% ( 70) 00:08:19.200 17745.132 - 17845.957: 86.6750% ( 46) 00:08:19.200 17845.957 - 17946.782: 87.2500% ( 46) 00:08:19.200 17946.782 - 18047.606: 87.9500% ( 56) 00:08:19.200 18047.606 - 18148.431: 88.6625% ( 57) 00:08:19.200 18148.431 - 18249.255: 89.4500% ( 63) 00:08:19.200 18249.255 - 18350.080: 90.2000% ( 60) 00:08:19.200 18350.080 - 18450.905: 90.7500% ( 44) 00:08:19.200 18450.905 - 18551.729: 91.2000% ( 36) 00:08:19.200 18551.729 - 18652.554: 91.5875% ( 31) 00:08:19.200 18652.554 - 18753.378: 92.0500% ( 37) 00:08:19.200 18753.378 - 18854.203: 92.6625% ( 49) 00:08:19.200 18854.203 - 18955.028: 93.0875% ( 34) 00:08:19.200 18955.028 - 19055.852: 93.5875% ( 40) 00:08:19.200 19055.852 - 19156.677: 94.0625% ( 38) 00:08:19.200 19156.677 - 19257.502: 94.2750% ( 17) 00:08:19.200 19257.502 - 19358.326: 94.6750% ( 32) 00:08:19.200 19358.326 - 19459.151: 95.0250% ( 28) 00:08:19.200 19459.151 - 19559.975: 95.2125% ( 15) 00:08:19.200 19559.975 - 19660.800: 95.4375% ( 18) 00:08:19.200 19660.800 - 19761.625: 95.6625% ( 18) 00:08:19.200 19761.625 - 19862.449: 96.0000% ( 27) 00:08:19.200 19862.449 - 19963.274: 96.2750% ( 22) 00:08:19.200 19963.274 - 20064.098: 96.4875% ( 17) 00:08:19.200 20064.098 - 20164.923: 96.5750% ( 7) 00:08:19.200 20164.923 - 20265.748: 96.6750% ( 8) 00:08:19.200 20265.748 - 20366.572: 96.7000% ( 2) 00:08:19.200 20366.572 - 20467.397: 96.7875% ( 7) 00:08:19.200 20467.397 - 20568.222: 96.8125% ( 2) 00:08:19.200 20568.222 - 20669.046: 97.0750% ( 21) 00:08:19.200 20669.046 - 20769.871: 97.4500% ( 30) 00:08:19.200 20769.871 - 20870.695: 97.6750% ( 18) 00:08:19.200 20870.695 - 20971.520: 97.7875% ( 9) 00:08:19.200 20971.520 - 21072.345: 97.8500% ( 5) 00:08:19.200 21072.345 - 21173.169: 97.8625% ( 1) 00:08:19.200 21173.169 - 21273.994: 97.9750% ( 9) 00:08:19.200 21273.994 - 21374.818: 98.0250% ( 4) 00:08:19.200 21374.818 - 21475.643: 98.2375% ( 17) 00:08:19.200 21475.643 - 21576.468: 98.3000% ( 5) 00:08:19.200 21576.468 - 21677.292: 98.3250% ( 2) 
00:08:19.200 21677.292 - 21778.117: 98.3875% ( 5) 00:08:19.200 21778.117 - 21878.942: 98.4000% ( 1) 00:08:19.200 28432.542 - 28634.191: 98.4250% ( 2) 00:08:19.200 28634.191 - 28835.840: 98.5750% ( 12) 00:08:19.200 28835.840 - 29037.489: 98.7625% ( 15) 00:08:19.200 29037.489 - 29239.138: 98.8750% ( 9) 00:08:19.200 29239.138 - 29440.788: 98.9000% ( 2) 00:08:19.200 29440.788 - 29642.437: 98.9875% ( 7) 00:08:19.200 29642.437 - 29844.086: 99.0750% ( 7) 00:08:19.200 29844.086 - 30045.735: 99.1625% ( 7) 00:08:19.200 30045.735 - 30247.385: 99.2000% ( 3) 00:08:19.200 37305.108 - 37506.757: 99.2125% ( 1) 00:08:19.200 37506.757 - 37708.406: 99.2375% ( 2) 00:08:19.200 37708.406 - 37910.055: 99.3250% ( 7) 00:08:19.200 37910.055 - 38111.705: 99.4500% ( 10) 00:08:19.200 38111.705 - 38313.354: 99.5500% ( 8) 00:08:19.200 38313.354 - 38515.003: 99.6000% ( 4) 00:08:19.200 38515.003 - 38716.652: 99.7000% ( 8) 00:08:19.200 38716.652 - 38918.302: 99.8000% ( 8) 00:08:19.200 38918.302 - 39119.951: 99.9000% ( 8) 00:08:19.200 39119.951 - 39321.600: 99.9875% ( 7) 00:08:19.200 39321.600 - 39523.249: 100.0000% ( 1) 00:08:19.200 00:08:19.200 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:19.200 ============================================================================== 00:08:19.200 Range in us Cumulative IO count 00:08:19.200 11393.182 - 11443.594: 0.0125% ( 1) 00:08:19.200 11544.418 - 11594.831: 0.0250% ( 1) 00:08:19.200 11594.831 - 11645.243: 0.0625% ( 3) 00:08:19.200 11645.243 - 11695.655: 0.1000% ( 3) 00:08:19.200 11695.655 - 11746.068: 0.1750% ( 6) 00:08:19.200 11746.068 - 11796.480: 0.2375% ( 5) 00:08:19.200 11796.480 - 11846.892: 0.3375% ( 8) 00:08:19.200 11846.892 - 11897.305: 0.4625% ( 10) 00:08:19.200 11897.305 - 11947.717: 0.5500% ( 7) 00:08:19.200 11947.717 - 11998.129: 0.6250% ( 6) 00:08:19.200 11998.129 - 12048.542: 0.7000% ( 6) 00:08:19.200 12048.542 - 12098.954: 0.7375% ( 3) 00:08:19.200 12098.954 - 12149.366: 0.7750% ( 3) 00:08:19.200 12149.366 - 12199.778: 0.8000% ( 2) 00:08:19.200 12199.778 - 12250.191: 0.8125% ( 1) 00:08:19.200 12401.428 - 12451.840: 0.8250% ( 1) 00:08:19.200 12451.840 - 12502.252: 0.8625% ( 3) 00:08:19.200 12502.252 - 12552.665: 0.9000% ( 3) 00:08:19.200 12552.665 - 12603.077: 0.9375% ( 3) 00:08:19.200 12603.077 - 12653.489: 1.0250% ( 7) 00:08:19.200 12653.489 - 12703.902: 1.0625% ( 3) 00:08:19.200 12703.902 - 12754.314: 1.1625% ( 8) 00:08:19.200 12754.314 - 12804.726: 1.2625% ( 8) 00:08:19.200 12804.726 - 12855.138: 1.3125% ( 4) 00:08:19.200 12855.138 - 12905.551: 1.3625% ( 4) 00:08:19.200 12905.551 - 13006.375: 1.4625% ( 8) 00:08:19.200 13006.375 - 13107.200: 1.5375% ( 6) 00:08:19.200 13107.200 - 13208.025: 1.6625% ( 10) 00:08:19.200 13208.025 - 13308.849: 1.8125% ( 12) 00:08:19.200 13308.849 - 13409.674: 2.1000% ( 23) 00:08:19.200 13409.674 - 13510.498: 2.4875% ( 31) 00:08:19.200 13510.498 - 13611.323: 2.8750% ( 31) 00:08:19.200 13611.323 - 13712.148: 3.4125% ( 43) 00:08:19.200 13712.148 - 13812.972: 3.9750% ( 45) 00:08:19.200 13812.972 - 13913.797: 4.9750% ( 80) 00:08:19.200 13913.797 - 14014.622: 6.4250% ( 116) 00:08:19.200 14014.622 - 14115.446: 8.4750% ( 164) 00:08:19.200 14115.446 - 14216.271: 10.7875% ( 185) 00:08:19.200 14216.271 - 14317.095: 13.8875% ( 248) 00:08:19.200 14317.095 - 14417.920: 17.4000% ( 281) 00:08:19.200 14417.920 - 14518.745: 20.2500% ( 228) 00:08:19.200 14518.745 - 14619.569: 23.1750% ( 234) 00:08:19.200 14619.569 - 14720.394: 26.4125% ( 259) 00:08:19.200 14720.394 - 14821.218: 29.2250% ( 225) 00:08:19.200 14821.218 - 14922.043: 
32.6750% ( 276) 00:08:19.200 14922.043 - 15022.868: 36.5125% ( 307) 00:08:19.200 15022.868 - 15123.692: 40.1625% ( 292) 00:08:19.200 15123.692 - 15224.517: 43.6750% ( 281) 00:08:19.200 15224.517 - 15325.342: 47.1500% ( 278) 00:08:19.200 15325.342 - 15426.166: 50.4750% ( 266) 00:08:19.200 15426.166 - 15526.991: 53.1750% ( 216) 00:08:19.200 15526.991 - 15627.815: 55.1125% ( 155) 00:08:19.200 15627.815 - 15728.640: 56.7500% ( 131) 00:08:19.200 15728.640 - 15829.465: 58.3000% ( 124) 00:08:19.200 15829.465 - 15930.289: 59.5000% ( 96) 00:08:19.200 15930.289 - 16031.114: 61.0375% ( 123) 00:08:19.200 16031.114 - 16131.938: 62.7875% ( 140) 00:08:19.200 16131.938 - 16232.763: 64.4000% ( 129) 00:08:19.200 16232.763 - 16333.588: 65.6625% ( 101) 00:08:19.200 16333.588 - 16434.412: 67.1125% ( 116) 00:08:19.200 16434.412 - 16535.237: 68.4375% ( 106) 00:08:19.200 16535.237 - 16636.062: 70.2875% ( 148) 00:08:19.200 16636.062 - 16736.886: 71.5000% ( 97) 00:08:19.200 16736.886 - 16837.711: 73.0625% ( 125) 00:08:19.200 16837.711 - 16938.535: 74.3000% ( 99) 00:08:19.200 16938.535 - 17039.360: 75.8125% ( 121) 00:08:19.200 17039.360 - 17140.185: 77.3875% ( 126) 00:08:19.200 17140.185 - 17241.009: 79.2000% ( 145) 00:08:19.200 17241.009 - 17341.834: 80.7875% ( 127) 00:08:19.200 17341.834 - 17442.658: 82.2500% ( 117) 00:08:19.200 17442.658 - 17543.483: 83.6125% ( 109) 00:08:19.200 17543.483 - 17644.308: 85.0125% ( 112) 00:08:19.200 17644.308 - 17745.132: 86.3250% ( 105) 00:08:19.200 17745.132 - 17845.957: 87.3500% ( 82) 00:08:19.200 17845.957 - 17946.782: 88.2125% ( 69) 00:08:19.200 17946.782 - 18047.606: 88.7500% ( 43) 00:08:19.200 18047.606 - 18148.431: 89.2000% ( 36) 00:08:19.200 18148.431 - 18249.255: 89.7875% ( 47) 00:08:19.200 18249.255 - 18350.080: 90.2750% ( 39) 00:08:19.200 18350.080 - 18450.905: 91.0125% ( 59) 00:08:19.200 18450.905 - 18551.729: 91.7500% ( 59) 00:08:19.200 18551.729 - 18652.554: 92.3625% ( 49) 00:08:19.200 18652.554 - 18753.378: 93.2000% ( 67) 00:08:19.200 18753.378 - 18854.203: 93.7375% ( 43) 00:08:19.200 18854.203 - 18955.028: 94.2375% ( 40) 00:08:19.200 18955.028 - 19055.852: 94.6250% ( 31) 00:08:19.200 19055.852 - 19156.677: 95.1250% ( 40) 00:08:19.200 19156.677 - 19257.502: 95.3375% ( 17) 00:08:19.200 19257.502 - 19358.326: 95.5250% ( 15) 00:08:19.200 19358.326 - 19459.151: 95.8625% ( 27) 00:08:19.200 19459.151 - 19559.975: 96.0125% ( 12) 00:08:19.200 19559.975 - 19660.800: 96.1500% ( 11) 00:08:19.200 19660.800 - 19761.625: 96.2750% ( 10) 00:08:19.200 19761.625 - 19862.449: 96.3875% ( 9) 00:08:19.200 19862.449 - 19963.274: 96.5125% ( 10) 00:08:19.200 19963.274 - 20064.098: 96.5750% ( 5) 00:08:19.200 20064.098 - 20164.923: 96.6250% ( 4) 00:08:19.200 20164.923 - 20265.748: 96.7250% ( 8) 00:08:19.200 20265.748 - 20366.572: 96.8500% ( 10) 00:08:19.200 20366.572 - 20467.397: 97.0625% ( 17) 00:08:19.200 20467.397 - 20568.222: 97.2375% ( 14) 00:08:19.200 20568.222 - 20669.046: 97.3500% ( 9) 00:08:19.200 20669.046 - 20769.871: 97.3750% ( 2) 00:08:19.200 20769.871 - 20870.695: 97.4250% ( 4) 00:08:19.200 20870.695 - 20971.520: 97.4750% ( 4) 00:08:19.200 20971.520 - 21072.345: 97.5250% ( 4) 00:08:19.200 21072.345 - 21173.169: 97.6375% ( 9) 00:08:19.201 21173.169 - 21273.994: 97.7500% ( 9) 00:08:19.201 21273.994 - 21374.818: 97.8375% ( 7) 00:08:19.201 21374.818 - 21475.643: 97.9125% ( 6) 00:08:19.201 21475.643 - 21576.468: 97.9625% ( 4) 00:08:19.201 21576.468 - 21677.292: 98.0000% ( 3) 00:08:19.201 21677.292 - 21778.117: 98.0625% ( 5) 00:08:19.201 21778.117 - 21878.942: 98.1125% ( 4) 
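The bucket edges in these tables are deliberately non-uniform: widths roughly double as latency grows (about 100 us per bucket near 21700 us, about 200 us near 28500 us in the table that continues below), which is the signature of a log-linear histogram. A minimal, self-contained sketch of that bucketing scheme, assuming each power-of-two range is split into 128 linear sub-buckets (this mirrors the general idea behind SPDK's histogram helper, not its exact implementation):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define SUB_BUCKET_SHIFT 7                       /* 128 linear sub-buckets per range */
#define SUB_BUCKET_COUNT (1u << SUB_BUCKET_SHIFT)
#define NUM_RANGES       58                      /* enough for 64-bit samples */

static uint64_t bucket[NUM_RANGES][SUB_BUCKET_COUNT];

/* Tally one latency sample (e.g. in microseconds) into its log-linear bucket. */
static void tally(uint64_t datapoint)
{
    uint32_t range = 0;
    uint64_t v = datapoint >> SUB_BUCKET_SHIFT;

    while (v != 0) {             /* find which power-of-two range the sample is in */
        v >>= 1;
        range++;
    }
    /* Within a range, the top SUB_BUCKET_SHIFT bits pick the linear sub-bucket,
     * so bucket width doubles each time the range index goes up by one. */
    uint32_t idx = (range == 0)
        ? (uint32_t)datapoint
        : (uint32_t)((datapoint >> (range - 1)) & (SUB_BUCKET_COUNT - 1));
    bucket[range][idx]++;
}

/* Walk buckets in order with a running total to get cumulative counts. */
static void print_cumulative(uint64_t total)
{
    uint64_t so_far = 0;

    for (uint32_t r = 0; r < NUM_RANGES; r++) {
        for (uint32_t i = 0; i < SUB_BUCKET_COUNT; i++) {
            if (bucket[r][i] == 0) {
                continue;
            }
            so_far += bucket[r][i];
            /* Lower edge of this bucket, in the same unit as the samples. */
            uint64_t lo = (r == 0) ? i : ((uint64_t)(SUB_BUCKET_COUNT + i) << (r - 1));
            printf("%12" PRIu64 ": %8.4f%% (%" PRIu64 ")\n",
                   lo, 100.0 * (double)so_far / (double)total, bucket[r][i]);
        }
    }
}

Cumulative percentages like the ones printed in these tables fall out of that single ordered pass; the per-bucket IO count is the value in parentheses.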
00:08:19.201 21878.942 - 21979.766: 98.1750% ( 5) 00:08:19.201 21979.766 - 22080.591: 98.2375% ( 5) 00:08:19.201 22080.591 - 22181.415: 98.3000% ( 5) 00:08:19.201 22181.415 - 22282.240: 98.3625% ( 5) 00:08:19.201 22282.240 - 22383.065: 98.4000% ( 3) 00:08:19.201 27827.594 - 28029.243: 98.4125% ( 1) 00:08:19.201 28029.243 - 28230.892: 98.5125% ( 8) 00:08:19.201 28230.892 - 28432.542: 98.6250% ( 9) 00:08:19.201 28432.542 - 28634.191: 98.7125% ( 7) 00:08:19.201 28634.191 - 28835.840: 98.8250% ( 9) 00:08:19.201 28835.840 - 29037.489: 98.9375% ( 9) 00:08:19.201 29037.489 - 29239.138: 99.0500% ( 9) 00:08:19.201 29239.138 - 29440.788: 99.1500% ( 8) 00:08:19.201 29440.788 - 29642.437: 99.2000% ( 4) 00:08:19.201 36901.809 - 37103.458: 99.2250% ( 2) 00:08:19.201 37103.458 - 37305.108: 99.3250% ( 8) 00:08:19.201 37305.108 - 37506.757: 99.4375% ( 9) 00:08:19.201 37506.757 - 37708.406: 99.5375% ( 8) 00:08:19.201 37708.406 - 37910.055: 99.6500% ( 9) 00:08:19.201 37910.055 - 38111.705: 99.7750% ( 10) 00:08:19.201 38111.705 - 38313.354: 99.8875% ( 9) 00:08:19.201 38313.354 - 38515.003: 100.0000% ( 9) 00:08:19.201 00:08:19.201 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:19.201 ============================================================================== 00:08:19.201 Range in us Cumulative IO count 00:08:19.201 9578.338 - 9628.751: 0.0375% ( 3) 00:08:19.201 9628.751 - 9679.163: 0.0750% ( 3) 00:08:19.201 9679.163 - 9729.575: 0.1125% ( 3) 00:08:19.201 9729.575 - 9779.988: 0.1750% ( 5) 00:08:19.201 9779.988 - 9830.400: 0.2625% ( 7) 00:08:19.201 9830.400 - 9880.812: 0.3875% ( 10) 00:08:19.201 9880.812 - 9931.225: 0.5125% ( 10) 00:08:19.201 9931.225 - 9981.637: 0.6000% ( 7) 00:08:19.201 9981.637 - 10032.049: 0.6500% ( 4) 00:08:19.201 10032.049 - 10082.462: 0.6750% ( 2) 00:08:19.201 10082.462 - 10132.874: 0.7125% ( 3) 00:08:19.201 10132.874 - 10183.286: 0.7500% ( 3) 00:08:19.201 10183.286 - 10233.698: 0.7750% ( 2) 00:08:19.201 10233.698 - 10284.111: 0.8000% ( 2) 00:08:19.201 12451.840 - 12502.252: 0.8125% ( 1) 00:08:19.201 12603.077 - 12653.489: 0.8625% ( 4) 00:08:19.201 12653.489 - 12703.902: 0.9000% ( 3) 00:08:19.201 12703.902 - 12754.314: 0.9500% ( 4) 00:08:19.201 12754.314 - 12804.726: 0.9875% ( 3) 00:08:19.201 12804.726 - 12855.138: 1.0500% ( 5) 00:08:19.201 12855.138 - 12905.551: 1.1000% ( 4) 00:08:19.201 12905.551 - 13006.375: 1.3375% ( 19) 00:08:19.201 13006.375 - 13107.200: 1.4750% ( 11) 00:08:19.201 13107.200 - 13208.025: 1.6125% ( 11) 00:08:19.201 13208.025 - 13308.849: 1.7500% ( 11) 00:08:19.201 13308.849 - 13409.674: 2.2875% ( 43) 00:08:19.201 13409.674 - 13510.498: 2.5250% ( 19) 00:08:19.201 13510.498 - 13611.323: 2.8250% ( 24) 00:08:19.201 13611.323 - 13712.148: 3.2750% ( 36) 00:08:19.201 13712.148 - 13812.972: 4.5500% ( 102) 00:08:19.201 13812.972 - 13913.797: 5.9375% ( 111) 00:08:19.201 13913.797 - 14014.622: 7.9250% ( 159) 00:08:19.201 14014.622 - 14115.446: 10.3125% ( 191) 00:08:19.201 14115.446 - 14216.271: 12.6625% ( 188) 00:08:19.201 14216.271 - 14317.095: 14.5625% ( 152) 00:08:19.201 14317.095 - 14417.920: 16.8875% ( 186) 00:08:19.201 14417.920 - 14518.745: 19.4125% ( 202) 00:08:19.201 14518.745 - 14619.569: 22.1500% ( 219) 00:08:19.201 14619.569 - 14720.394: 24.8750% ( 218) 00:08:19.201 14720.394 - 14821.218: 28.5250% ( 292) 00:08:19.201 14821.218 - 14922.043: 32.8250% ( 344) 00:08:19.201 14922.043 - 15022.868: 36.6750% ( 308) 00:08:19.201 15022.868 - 15123.692: 39.7125% ( 243) 00:08:19.201 15123.692 - 15224.517: 42.8375% ( 250) 00:08:19.201 15224.517 - 15325.342: 
46.1500% ( 265) 00:08:19.201 15325.342 - 15426.166: 48.6250% ( 198) 00:08:19.201 15426.166 - 15526.991: 51.0875% ( 197) 00:08:19.201 15526.991 - 15627.815: 53.4750% ( 191) 00:08:19.201 15627.815 - 15728.640: 55.9625% ( 199) 00:08:19.201 15728.640 - 15829.465: 57.9625% ( 160) 00:08:19.201 15829.465 - 15930.289: 60.0750% ( 169) 00:08:19.201 15930.289 - 16031.114: 62.0000% ( 154) 00:08:19.201 16031.114 - 16131.938: 64.1125% ( 169) 00:08:19.201 16131.938 - 16232.763: 65.9625% ( 148) 00:08:19.201 16232.763 - 16333.588: 67.8250% ( 149) 00:08:19.201 16333.588 - 16434.412: 69.7625% ( 155) 00:08:19.201 16434.412 - 16535.237: 71.2250% ( 117) 00:08:19.201 16535.237 - 16636.062: 72.7625% ( 123) 00:08:19.201 16636.062 - 16736.886: 74.0875% ( 106) 00:08:19.201 16736.886 - 16837.711: 75.5250% ( 115) 00:08:19.201 16837.711 - 16938.535: 77.0125% ( 119) 00:08:19.201 16938.535 - 17039.360: 78.3250% ( 105) 00:08:19.201 17039.360 - 17140.185: 79.9125% ( 127) 00:08:19.201 17140.185 - 17241.009: 81.5625% ( 132) 00:08:19.201 17241.009 - 17341.834: 83.2375% ( 134) 00:08:19.201 17341.834 - 17442.658: 84.4250% ( 95) 00:08:19.201 17442.658 - 17543.483: 85.3750% ( 76) 00:08:19.201 17543.483 - 17644.308: 86.3875% ( 81) 00:08:19.201 17644.308 - 17745.132: 87.4500% ( 85) 00:08:19.201 17745.132 - 17845.957: 88.1750% ( 58) 00:08:19.201 17845.957 - 17946.782: 88.7500% ( 46) 00:08:19.201 17946.782 - 18047.606: 89.3250% ( 46) 00:08:19.201 18047.606 - 18148.431: 89.9000% ( 46) 00:08:19.201 18148.431 - 18249.255: 90.5625% ( 53) 00:08:19.201 18249.255 - 18350.080: 91.2875% ( 58) 00:08:19.201 18350.080 - 18450.905: 91.9125% ( 50) 00:08:19.201 18450.905 - 18551.729: 92.4125% ( 40) 00:08:19.201 18551.729 - 18652.554: 92.8500% ( 35) 00:08:19.201 18652.554 - 18753.378: 93.2625% ( 33) 00:08:19.201 18753.378 - 18854.203: 93.6250% ( 29) 00:08:19.201 18854.203 - 18955.028: 93.9500% ( 26) 00:08:19.201 18955.028 - 19055.852: 94.2250% ( 22) 00:08:19.201 19055.852 - 19156.677: 94.4625% ( 19) 00:08:19.201 19156.677 - 19257.502: 94.7875% ( 26) 00:08:19.201 19257.502 - 19358.326: 94.9125% ( 10) 00:08:19.201 19358.326 - 19459.151: 95.0250% ( 9) 00:08:19.201 19459.151 - 19559.975: 95.1500% ( 10) 00:08:19.201 19559.975 - 19660.800: 95.2750% ( 10) 00:08:19.201 19660.800 - 19761.625: 95.3875% ( 9) 00:08:19.201 19761.625 - 19862.449: 95.5375% ( 12) 00:08:19.201 19862.449 - 19963.274: 95.7375% ( 16) 00:08:19.201 19963.274 - 20064.098: 95.8875% ( 12) 00:08:19.201 20064.098 - 20164.923: 96.0375% ( 12) 00:08:19.201 20164.923 - 20265.748: 96.1375% ( 8) 00:08:19.201 20265.748 - 20366.572: 96.2250% ( 7) 00:08:19.201 20366.572 - 20467.397: 96.5750% ( 28) 00:08:19.201 20467.397 - 20568.222: 96.6500% ( 6) 00:08:19.201 20568.222 - 20669.046: 96.7000% ( 4) 00:08:19.201 20669.046 - 20769.871: 96.7625% ( 5) 00:08:19.201 20769.871 - 20870.695: 96.8000% ( 3) 00:08:19.201 21273.994 - 21374.818: 96.8125% ( 1) 00:08:19.201 21374.818 - 21475.643: 96.9625% ( 12) 00:08:19.201 21475.643 - 21576.468: 97.1125% ( 12) 00:08:19.201 21576.468 - 21677.292: 97.3000% ( 15) 00:08:19.201 21677.292 - 21778.117: 97.4750% ( 14) 00:08:19.201 21778.117 - 21878.942: 97.7250% ( 20) 00:08:19.201 21878.942 - 21979.766: 97.8750% ( 12) 00:08:19.201 21979.766 - 22080.591: 97.9625% ( 7) 00:08:19.201 22080.591 - 22181.415: 98.0625% ( 8) 00:08:19.201 22181.415 - 22282.240: 98.1750% ( 9) 00:08:19.201 22282.240 - 22383.065: 98.2625% ( 7) 00:08:19.201 22383.065 - 22483.889: 98.3250% ( 5) 00:08:19.201 22483.889 - 22584.714: 98.3875% ( 5) 00:08:19.201 22584.714 - 22685.538: 98.4000% ( 1) 00:08:19.201 
28230.892 - 28432.542: 98.4500% ( 4) 00:08:19.201 28432.542 - 28634.191: 98.5625% ( 9) 00:08:19.201 28634.191 - 28835.840: 98.6625% ( 8) 00:08:19.201 28835.840 - 29037.489: 98.7750% ( 9) 00:08:19.201 29037.489 - 29239.138: 98.8375% ( 5) 00:08:19.201 29239.138 - 29440.788: 98.9125% ( 6) 00:08:19.201 29440.788 - 29642.437: 99.0375% ( 10) 00:08:19.201 29642.437 - 29844.086: 99.1375% ( 8) 00:08:19.201 29844.086 - 30045.735: 99.2000% ( 5) 00:08:19.201 37910.055 - 38111.705: 99.2125% ( 1) 00:08:19.201 38111.705 - 38313.354: 99.3125% ( 8) 00:08:19.201 38313.354 - 38515.003: 99.4250% ( 9) 00:08:19.201 38515.003 - 38716.652: 99.5250% ( 8) 00:08:19.201 38716.652 - 38918.302: 99.6250% ( 8) 00:08:19.201 38918.302 - 39119.951: 99.7375% ( 9) 00:08:19.201 39119.951 - 39321.600: 99.8375% ( 8) 00:08:19.201 39321.600 - 39523.249: 99.9500% ( 9) 00:08:19.201 39523.249 - 39724.898: 100.0000% ( 4) 00:08:19.201 00:08:19.201 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:19.201 ============================================================================== 00:08:19.201 Range in us Cumulative IO count 00:08:19.201 8570.092 - 8620.505: 0.0125% ( 1) 00:08:19.201 8620.505 - 8670.917: 0.0250% ( 1) 00:08:19.201 8670.917 - 8721.329: 0.0625% ( 3) 00:08:19.201 8721.329 - 8771.742: 0.1000% ( 3) 00:08:19.201 8771.742 - 8822.154: 0.2000% ( 8) 00:08:19.201 8822.154 - 8872.566: 0.3000% ( 8) 00:08:19.201 8872.566 - 8922.978: 0.3875% ( 7) 00:08:19.201 8922.978 - 8973.391: 0.5125% ( 10) 00:08:19.201 8973.391 - 9023.803: 0.5750% ( 5) 00:08:19.201 9023.803 - 9074.215: 0.6375% ( 5) 00:08:19.201 9074.215 - 9124.628: 0.6875% ( 4) 00:08:19.201 9124.628 - 9175.040: 0.7125% ( 2) 00:08:19.201 9175.040 - 9225.452: 0.7250% ( 1) 00:08:19.201 9225.452 - 9275.865: 0.7625% ( 3) 00:08:19.201 9275.865 - 9326.277: 0.7875% ( 2) 00:08:19.201 9326.277 - 9376.689: 0.8000% ( 1) 00:08:19.201 12552.665 - 12603.077: 0.8125% ( 1) 00:08:19.201 12603.077 - 12653.489: 0.8250% ( 1) 00:08:19.201 12703.902 - 12754.314: 0.8375% ( 1) 00:08:19.201 12804.726 - 12855.138: 0.8625% ( 2) 00:08:19.201 12855.138 - 12905.551: 0.9250% ( 5) 00:08:19.202 12905.551 - 13006.375: 1.1125% ( 15) 00:08:19.202 13006.375 - 13107.200: 1.4375% ( 26) 00:08:19.202 13107.200 - 13208.025: 1.8250% ( 31) 00:08:19.202 13208.025 - 13308.849: 2.1875% ( 29) 00:08:19.202 13308.849 - 13409.674: 2.8000% ( 49) 00:08:19.202 13409.674 - 13510.498: 3.1375% ( 27) 00:08:19.202 13510.498 - 13611.323: 3.4625% ( 26) 00:08:19.202 13611.323 - 13712.148: 3.8875% ( 34) 00:08:19.202 13712.148 - 13812.972: 4.5750% ( 55) 00:08:19.202 13812.972 - 13913.797: 5.5125% ( 75) 00:08:19.202 13913.797 - 14014.622: 6.7000% ( 95) 00:08:19.202 14014.622 - 14115.446: 8.1125% ( 113) 00:08:19.202 14115.446 - 14216.271: 9.7750% ( 133) 00:08:19.202 14216.271 - 14317.095: 11.8750% ( 168) 00:08:19.202 14317.095 - 14417.920: 14.7000% ( 226) 00:08:19.202 14417.920 - 14518.745: 18.7500% ( 324) 00:08:19.202 14518.745 - 14619.569: 22.1750% ( 274) 00:08:19.202 14619.569 - 14720.394: 25.4250% ( 260) 00:08:19.202 14720.394 - 14821.218: 29.2000% ( 302) 00:08:19.202 14821.218 - 14922.043: 32.3375% ( 251) 00:08:19.202 14922.043 - 15022.868: 35.7625% ( 274) 00:08:19.202 15022.868 - 15123.692: 39.4250% ( 293) 00:08:19.202 15123.692 - 15224.517: 42.3375% ( 233) 00:08:19.202 15224.517 - 15325.342: 45.7375% ( 272) 00:08:19.202 15325.342 - 15426.166: 48.9375% ( 256) 00:08:19.202 15426.166 - 15526.991: 52.5125% ( 286) 00:08:19.202 15526.991 - 15627.815: 55.3375% ( 226) 00:08:19.202 15627.815 - 15728.640: 57.6875% ( 188) 
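Each row reports the cumulative share of IOs completed within the bucket's upper edge, so percentiles can be read straight off a table: the first row whose percentage reaches the target bounds that percentile from above. A small sketch, using a hand-picked subset of the 0000:00:13.0 rows just above as sample data:

#include <stddef.h>
#include <stdio.h>

struct bucket_row {
    double upper_us;   /* bucket upper edge, microseconds */
    double cum_pct;    /* cumulative % of IOs completed within it */
};

/* Illustrative subset copied from the 0000:00:13.0 table above. */
static const struct bucket_row rows[] = {
    { 15325.342,  46.1500 },
    { 16535.237,  71.2250 },
    { 17946.782,  88.7500 },
    { 19055.852,  94.2250 },
    { 22685.538,  98.4000 },
    { 30045.735,  99.2000 },
    { 39724.898, 100.0000 },
};

/* Upper bound for a percentile: first bucket whose cumulative share reaches it. */
static double percentile_upper_bound_us(double pct)
{
    for (size_t i = 0; i < sizeof(rows) / sizeof(rows[0]); i++) {
        if (rows[i].cum_pct >= pct) {
            return rows[i].upper_us;
        }
    }
    return rows[sizeof(rows) / sizeof(rows[0]) - 1].upper_us;
}

int main(void)
{
    printf("p50 <= %.3f us\n", percentile_upper_bound_us(50.0));
    printf("p99 <= %.3f us\n", percentile_upper_bound_us(99.0));
    return 0;
}

For this device that gives p50 <= 16535.237 us and p99 <= 30045.735 us; the true percentile lies somewhere inside the bucket, so the bound is only as tight as the bucket width.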
00:08:19.202 15728.640 - 15829.465: 60.0500% ( 189) 00:08:19.202 15829.465 - 15930.289: 61.8875% ( 147) 00:08:19.202 15930.289 - 16031.114: 63.3125% ( 114) 00:08:19.202 16031.114 - 16131.938: 65.3875% ( 166) 00:08:19.202 16131.938 - 16232.763: 66.9875% ( 128) 00:08:19.202 16232.763 - 16333.588: 68.4625% ( 118) 00:08:19.202 16333.588 - 16434.412: 69.9125% ( 116) 00:08:19.202 16434.412 - 16535.237: 71.5000% ( 127) 00:08:19.202 16535.237 - 16636.062: 73.4250% ( 154) 00:08:19.202 16636.062 - 16736.886: 75.1000% ( 134) 00:08:19.202 16736.886 - 16837.711: 76.4250% ( 106) 00:08:19.202 16837.711 - 16938.535: 77.8750% ( 116) 00:08:19.202 16938.535 - 17039.360: 78.9250% ( 84) 00:08:19.202 17039.360 - 17140.185: 79.9250% ( 80) 00:08:19.202 17140.185 - 17241.009: 81.0375% ( 89) 00:08:19.202 17241.009 - 17341.834: 82.1375% ( 88) 00:08:19.202 17341.834 - 17442.658: 83.1625% ( 82) 00:08:19.202 17442.658 - 17543.483: 84.0750% ( 73) 00:08:19.202 17543.483 - 17644.308: 84.9750% ( 72) 00:08:19.202 17644.308 - 17745.132: 85.8000% ( 66) 00:08:19.202 17745.132 - 17845.957: 86.6375% ( 67) 00:08:19.202 17845.957 - 17946.782: 87.5625% ( 74) 00:08:19.202 17946.782 - 18047.606: 88.4875% ( 74) 00:08:19.202 18047.606 - 18148.431: 89.4500% ( 77) 00:08:19.202 18148.431 - 18249.255: 90.3875% ( 75) 00:08:19.202 18249.255 - 18350.080: 91.1500% ( 61) 00:08:19.202 18350.080 - 18450.905: 91.9250% ( 62) 00:08:19.202 18450.905 - 18551.729: 92.3875% ( 37) 00:08:19.202 18551.729 - 18652.554: 92.8000% ( 33) 00:08:19.202 18652.554 - 18753.378: 93.1500% ( 28) 00:08:19.202 18753.378 - 18854.203: 93.4875% ( 27) 00:08:19.202 18854.203 - 18955.028: 93.8750% ( 31) 00:08:19.202 18955.028 - 19055.852: 94.1000% ( 18) 00:08:19.202 19055.852 - 19156.677: 94.2750% ( 14) 00:08:19.202 19156.677 - 19257.502: 94.4000% ( 10) 00:08:19.202 19257.502 - 19358.326: 94.8250% ( 34) 00:08:19.202 19358.326 - 19459.151: 95.0750% ( 20) 00:08:19.202 19459.151 - 19559.975: 95.2875% ( 17) 00:08:19.202 19559.975 - 19660.800: 95.5375% ( 20) 00:08:19.202 19660.800 - 19761.625: 95.7125% ( 14) 00:08:19.202 19761.625 - 19862.449: 95.8500% ( 11) 00:08:19.202 19862.449 - 19963.274: 95.9500% ( 8) 00:08:19.202 19963.274 - 20064.098: 96.0875% ( 11) 00:08:19.202 20064.098 - 20164.923: 96.1625% ( 6) 00:08:19.202 20164.923 - 20265.748: 96.2125% ( 4) 00:08:19.202 20265.748 - 20366.572: 96.3000% ( 7) 00:08:19.202 20366.572 - 20467.397: 96.3625% ( 5) 00:08:19.202 20467.397 - 20568.222: 96.4750% ( 9) 00:08:19.202 20568.222 - 20669.046: 96.7000% ( 18) 00:08:19.202 20669.046 - 20769.871: 96.8625% ( 13) 00:08:19.202 20769.871 - 20870.695: 96.9625% ( 8) 00:08:19.202 20870.695 - 20971.520: 97.0750% ( 9) 00:08:19.202 20971.520 - 21072.345: 97.2125% ( 11) 00:08:19.202 21072.345 - 21173.169: 97.3000% ( 7) 00:08:19.202 21173.169 - 21273.994: 97.4250% ( 10) 00:08:19.202 21273.994 - 21374.818: 97.5750% ( 12) 00:08:19.202 21374.818 - 21475.643: 97.7000% ( 10) 00:08:19.202 21475.643 - 21576.468: 97.9250% ( 18) 00:08:19.202 21576.468 - 21677.292: 98.0875% ( 13) 00:08:19.202 21677.292 - 21778.117: 98.2125% ( 10) 00:08:19.202 21778.117 - 21878.942: 98.3000% ( 7) 00:08:19.202 21878.942 - 21979.766: 98.3625% ( 5) 00:08:19.202 21979.766 - 22080.591: 98.4000% ( 3) 00:08:19.202 27625.945 - 27827.594: 98.4750% ( 6) 00:08:19.202 27827.594 - 28029.243: 98.5750% ( 8) 00:08:19.202 28029.243 - 28230.892: 98.6875% ( 9) 00:08:19.202 28230.892 - 28432.542: 98.8000% ( 9) 00:08:19.202 28432.542 - 28634.191: 98.9125% ( 9) 00:08:19.202 28634.191 - 28835.840: 99.0250% ( 9) 00:08:19.202 28835.840 - 29037.489: 
99.1375% ( 9) 00:08:19.202 29037.489 - 29239.138: 99.2000% ( 5) 00:08:19.202 37708.406 - 37910.055: 99.3000% ( 8) 00:08:19.202 37910.055 - 38111.705: 99.4125% ( 9) 00:08:19.202 38111.705 - 38313.354: 99.5250% ( 9) 00:08:19.202 38313.354 - 38515.003: 99.6250% ( 8) 00:08:19.202 38515.003 - 38716.652: 99.7250% ( 8) 00:08:19.202 38716.652 - 38918.302: 99.8500% ( 10) 00:08:19.202 38918.302 - 39119.951: 99.9625% ( 9) 00:08:19.202 39119.951 - 39321.600: 100.0000% ( 3) 00:08:19.202 00:08:19.202 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:19.202 ============================================================================== 00:08:19.202 Range in us Cumulative IO count 00:08:19.202 7612.258 - 7662.671: 0.0375% ( 3) 00:08:19.202 7662.671 - 7713.083: 0.1000% ( 5) 00:08:19.202 7713.083 - 7763.495: 0.1500% ( 4) 00:08:19.202 7763.495 - 7813.908: 0.2250% ( 6) 00:08:19.202 7813.908 - 7864.320: 0.3000% ( 6) 00:08:19.202 7864.320 - 7914.732: 0.4375% ( 11) 00:08:19.202 7914.732 - 7965.145: 0.5125% ( 6) 00:08:19.202 7965.145 - 8015.557: 0.5625% ( 4) 00:08:19.202 8015.557 - 8065.969: 0.6000% ( 3) 00:08:19.202 8065.969 - 8116.382: 0.6500% ( 4) 00:08:19.202 8116.382 - 8166.794: 0.7000% ( 4) 00:08:19.202 8166.794 - 8217.206: 0.7375% ( 3) 00:08:19.202 8217.206 - 8267.618: 0.7750% ( 3) 00:08:19.202 8267.618 - 8318.031: 0.8000% ( 2) 00:08:19.202 12703.902 - 12754.314: 0.8375% ( 3) 00:08:19.202 12754.314 - 12804.726: 0.9125% ( 6) 00:08:19.202 12804.726 - 12855.138: 0.9750% ( 5) 00:08:19.202 12855.138 - 12905.551: 1.0750% ( 8) 00:08:19.202 12905.551 - 13006.375: 1.3250% ( 20) 00:08:19.202 13006.375 - 13107.200: 1.6250% ( 24) 00:08:19.202 13107.200 - 13208.025: 1.9875% ( 29) 00:08:19.202 13208.025 - 13308.849: 2.3375% ( 28) 00:08:19.202 13308.849 - 13409.674: 2.5375% ( 16) 00:08:19.202 13409.674 - 13510.498: 2.8125% ( 22) 00:08:19.202 13510.498 - 13611.323: 3.3000% ( 39) 00:08:19.202 13611.323 - 13712.148: 3.8000% ( 40) 00:08:19.202 13712.148 - 13812.972: 4.4375% ( 51) 00:08:19.202 13812.972 - 13913.797: 5.3375% ( 72) 00:08:19.202 13913.797 - 14014.622: 6.8625% ( 122) 00:08:19.202 14014.622 - 14115.446: 9.0750% ( 177) 00:08:19.202 14115.446 - 14216.271: 11.2500% ( 174) 00:08:19.202 14216.271 - 14317.095: 13.3625% ( 169) 00:08:19.202 14317.095 - 14417.920: 16.9125% ( 284) 00:08:19.202 14417.920 - 14518.745: 19.7500% ( 227) 00:08:19.202 14518.745 - 14619.569: 22.3000% ( 204) 00:08:19.202 14619.569 - 14720.394: 25.4875% ( 255) 00:08:19.202 14720.394 - 14821.218: 28.9250% ( 275) 00:08:19.202 14821.218 - 14922.043: 32.0750% ( 252) 00:08:19.202 14922.043 - 15022.868: 36.5125% ( 355) 00:08:19.202 15022.868 - 15123.692: 40.5625% ( 324) 00:08:19.202 15123.692 - 15224.517: 44.7250% ( 333) 00:08:19.202 15224.517 - 15325.342: 47.8625% ( 251) 00:08:19.202 15325.342 - 15426.166: 51.0125% ( 252) 00:08:19.202 15426.166 - 15526.991: 53.7250% ( 217) 00:08:19.202 15526.991 - 15627.815: 56.3500% ( 210) 00:08:19.202 15627.815 - 15728.640: 59.1125% ( 221) 00:08:19.202 15728.640 - 15829.465: 61.2750% ( 173) 00:08:19.202 15829.465 - 15930.289: 62.9875% ( 137) 00:08:19.202 15930.289 - 16031.114: 64.3250% ( 107) 00:08:19.202 16031.114 - 16131.938: 65.6125% ( 103) 00:08:19.202 16131.938 - 16232.763: 66.6250% ( 81) 00:08:19.202 16232.763 - 16333.588: 67.6750% ( 84) 00:08:19.202 16333.588 - 16434.412: 68.7750% ( 88) 00:08:19.202 16434.412 - 16535.237: 70.0250% ( 100) 00:08:19.202 16535.237 - 16636.062: 71.3750% ( 108) 00:08:19.202 16636.062 - 16736.886: 73.0125% ( 131) 00:08:19.202 16736.886 - 16837.711: 74.1750% ( 93) 
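perf emits one latency table per namespace, which is why 0000:00:12.0 contributes three (NSID 1 above; the NSID 2 table continues below, then NSID 3). For context, walking a controller's active namespaces with the SPDK NVMe driver looks roughly like the following sketch; it assumes an already-attached ctrlr handle, and the decimal-GB rounding is illustrative:

#include <inttypes.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Print every active namespace of an attached controller, one line per NSID. */
static void list_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
    for (uint32_t nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
         nsid != 0;
         nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
        struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

        if (ns == NULL) {
            continue;
        }
        printf("Namespace ID: %" PRIu32 " size: %" PRIu64 "GB\n",
               nsid, spdk_nvme_ns_get_size(ns) / 1000000000ULL);
    }
}

The same walk is what produces the "Namespace ID: N size: ..." lines in the hello_world output further down.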
00:08:19.202 16837.711 - 16938.535: 75.5125% ( 107) 00:08:19.202 16938.535 - 17039.360: 77.2125% ( 136) 00:08:19.202 17039.360 - 17140.185: 78.7000% ( 119) 00:08:19.202 17140.185 - 17241.009: 79.9875% ( 103) 00:08:19.202 17241.009 - 17341.834: 81.2375% ( 100) 00:08:19.202 17341.834 - 17442.658: 82.4875% ( 100) 00:08:19.202 17442.658 - 17543.483: 83.4875% ( 80) 00:08:19.202 17543.483 - 17644.308: 84.3250% ( 67) 00:08:19.202 17644.308 - 17745.132: 85.0375% ( 57) 00:08:19.202 17745.132 - 17845.957: 85.9250% ( 71) 00:08:19.202 17845.957 - 17946.782: 86.9000% ( 78) 00:08:19.202 17946.782 - 18047.606: 87.4500% ( 44) 00:08:19.202 18047.606 - 18148.431: 88.1750% ( 58) 00:08:19.202 18148.431 - 18249.255: 89.2625% ( 87) 00:08:19.202 18249.255 - 18350.080: 90.0125% ( 60) 00:08:19.202 18350.080 - 18450.905: 90.7250% ( 57) 00:08:19.202 18450.905 - 18551.729: 91.3375% ( 49) 00:08:19.202 18551.729 - 18652.554: 92.1125% ( 62) 00:08:19.202 18652.554 - 18753.378: 92.8625% ( 60) 00:08:19.202 18753.378 - 18854.203: 93.3375% ( 38) 00:08:19.202 18854.203 - 18955.028: 93.8500% ( 41) 00:08:19.202 18955.028 - 19055.852: 94.4625% ( 49) 00:08:19.203 19055.852 - 19156.677: 94.9250% ( 37) 00:08:19.203 19156.677 - 19257.502: 95.1625% ( 19) 00:08:19.203 19257.502 - 19358.326: 95.4000% ( 19) 00:08:19.203 19358.326 - 19459.151: 95.5875% ( 15) 00:08:19.203 19459.151 - 19559.975: 95.7000% ( 9) 00:08:19.203 19559.975 - 19660.800: 95.8000% ( 8) 00:08:19.203 19660.800 - 19761.625: 95.8875% ( 7) 00:08:19.203 19761.625 - 19862.449: 96.0500% ( 13) 00:08:19.203 19862.449 - 19963.274: 96.1750% ( 10) 00:08:19.203 19963.274 - 20064.098: 96.2375% ( 5) 00:08:19.203 20064.098 - 20164.923: 96.2750% ( 3) 00:08:19.203 20164.923 - 20265.748: 96.3250% ( 4) 00:08:19.203 20265.748 - 20366.572: 96.3875% ( 5) 00:08:19.203 20366.572 - 20467.397: 96.4500% ( 5) 00:08:19.203 20467.397 - 20568.222: 96.5625% ( 9) 00:08:19.203 20568.222 - 20669.046: 96.7125% ( 12) 00:08:19.203 20669.046 - 20769.871: 96.9250% ( 17) 00:08:19.203 20769.871 - 20870.695: 97.1375% ( 17) 00:08:19.203 20870.695 - 20971.520: 97.3875% ( 20) 00:08:19.203 20971.520 - 21072.345: 97.7125% ( 26) 00:08:19.203 21072.345 - 21173.169: 97.8875% ( 14) 00:08:19.203 21173.169 - 21273.994: 98.0500% ( 13) 00:08:19.203 21273.994 - 21374.818: 98.1875% ( 11) 00:08:19.203 21374.818 - 21475.643: 98.2500% ( 5) 00:08:19.203 21475.643 - 21576.468: 98.3125% ( 5) 00:08:19.203 21576.468 - 21677.292: 98.3750% ( 5) 00:08:19.203 21677.292 - 21778.117: 98.4000% ( 2) 00:08:19.203 27020.997 - 27222.646: 98.4500% ( 4) 00:08:19.203 27222.646 - 27424.295: 98.5375% ( 7) 00:08:19.203 27424.295 - 27625.945: 98.6500% ( 9) 00:08:19.203 27625.945 - 27827.594: 98.7625% ( 9) 00:08:19.203 27827.594 - 28029.243: 98.8750% ( 9) 00:08:19.203 28029.243 - 28230.892: 98.9750% ( 8) 00:08:19.203 28230.892 - 28432.542: 99.0750% ( 8) 00:08:19.203 28432.542 - 28634.191: 99.1875% ( 9) 00:08:19.203 28634.191 - 28835.840: 99.2000% ( 1) 00:08:19.203 36901.809 - 37103.458: 99.2250% ( 2) 00:08:19.203 37103.458 - 37305.108: 99.3375% ( 9) 00:08:19.203 37305.108 - 37506.757: 99.4375% ( 8) 00:08:19.203 37506.757 - 37708.406: 99.5375% ( 8) 00:08:19.203 37708.406 - 37910.055: 99.6500% ( 9) 00:08:19.203 37910.055 - 38111.705: 99.7625% ( 9) 00:08:19.203 38111.705 - 38313.354: 99.8750% ( 9) 00:08:19.203 38313.354 - 38515.003: 99.9875% ( 9) 00:08:19.203 38515.003 - 38716.652: 100.0000% ( 1) 00:08:19.203 00:08:19.203 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:19.203 
============================================================================== 00:08:19.203 Range in us Cumulative IO count 00:08:19.203 7108.135 - 7158.548: 0.0248% ( 2) 00:08:19.203 7158.548 - 7208.960: 0.0620% ( 3) 00:08:19.203 7208.960 - 7259.372: 0.0868% ( 2) 00:08:19.203 7259.372 - 7309.785: 0.1984% ( 9) 00:08:19.203 7309.785 - 7360.197: 0.3472% ( 12) 00:08:19.203 7360.197 - 7410.609: 0.4836% ( 11) 00:08:19.203 7410.609 - 7461.022: 0.5456% ( 5) 00:08:19.203 7461.022 - 7511.434: 0.5704% ( 2) 00:08:19.203 7511.434 - 7561.846: 0.5952% ( 2) 00:08:19.203 7561.846 - 7612.258: 0.6324% ( 3) 00:08:19.203 7612.258 - 7662.671: 0.6572% ( 2) 00:08:19.203 7662.671 - 7713.083: 0.6944% ( 3) 00:08:19.203 7713.083 - 7763.495: 0.7192% ( 2) 00:08:19.203 7763.495 - 7813.908: 0.7440% ( 2) 00:08:19.203 7813.908 - 7864.320: 0.7812% ( 3) 00:08:19.203 7864.320 - 7914.732: 0.7937% ( 1) 00:08:19.203 12451.840 - 12502.252: 0.8185% ( 2) 00:08:19.203 12502.252 - 12552.665: 0.8557% ( 3) 00:08:19.203 12552.665 - 12603.077: 0.8805% ( 2) 00:08:19.203 12603.077 - 12653.489: 0.9053% ( 2) 00:08:19.203 12653.489 - 12703.902: 0.9549% ( 4) 00:08:19.203 12703.902 - 12754.314: 1.0169% ( 5) 00:08:19.203 12754.314 - 12804.726: 1.0665% ( 4) 00:08:19.203 12804.726 - 12855.138: 1.1533% ( 7) 00:08:19.203 12855.138 - 12905.551: 1.3517% ( 16) 00:08:19.203 12905.551 - 13006.375: 1.5873% ( 19) 00:08:19.203 13006.375 - 13107.200: 1.9965% ( 33) 00:08:19.203 13107.200 - 13208.025: 2.5670% ( 46) 00:08:19.203 13208.025 - 13308.849: 3.0382% ( 38) 00:08:19.203 13308.849 - 13409.674: 3.4970% ( 37) 00:08:19.203 13409.674 - 13510.498: 3.8690% ( 30) 00:08:19.203 13510.498 - 13611.323: 4.2535% ( 31) 00:08:19.203 13611.323 - 13712.148: 4.7495% ( 40) 00:08:19.203 13712.148 - 13812.972: 5.4067% ( 53) 00:08:19.203 13812.972 - 13913.797: 6.3740% ( 78) 00:08:19.203 13913.797 - 14014.622: 7.5273% ( 93) 00:08:19.203 14014.622 - 14115.446: 9.2386% ( 138) 00:08:19.203 14115.446 - 14216.271: 11.5451% ( 186) 00:08:19.203 14216.271 - 14317.095: 13.9757% ( 196) 00:08:19.203 14317.095 - 14417.920: 16.8527% ( 232) 00:08:19.203 14417.920 - 14518.745: 19.9777% ( 252) 00:08:19.203 14518.745 - 14619.569: 23.1399% ( 255) 00:08:19.203 14619.569 - 14720.394: 26.5005% ( 271) 00:08:19.203 14720.394 - 14821.218: 29.5511% ( 246) 00:08:19.203 14821.218 - 14922.043: 32.7009% ( 254) 00:08:19.203 14922.043 - 15022.868: 36.5575% ( 311) 00:08:19.203 15022.868 - 15123.692: 40.3770% ( 308) 00:08:19.203 15123.692 - 15224.517: 44.0352% ( 295) 00:08:19.203 15224.517 - 15325.342: 47.0362% ( 242) 00:08:19.203 15325.342 - 15426.166: 49.9876% ( 238) 00:08:19.203 15426.166 - 15526.991: 52.8274% ( 229) 00:08:19.203 15526.991 - 15627.815: 55.4812% ( 214) 00:08:19.203 15627.815 - 15728.640: 57.8993% ( 195) 00:08:19.203 15728.640 - 15829.465: 59.9950% ( 169) 00:08:19.203 15829.465 - 15930.289: 61.9544% ( 158) 00:08:19.203 15930.289 - 16031.114: 63.9261% ( 159) 00:08:19.203 16031.114 - 16131.938: 65.5010% ( 127) 00:08:19.203 16131.938 - 16232.763: 66.6419% ( 92) 00:08:19.203 16232.763 - 16333.588: 67.8819% ( 100) 00:08:19.203 16333.588 - 16434.412: 68.8988% ( 82) 00:08:19.203 16434.412 - 16535.237: 70.2009% ( 105) 00:08:19.203 16535.237 - 16636.062: 71.0441% ( 68) 00:08:19.203 16636.062 - 16736.886: 72.3338% ( 104) 00:08:19.203 16736.886 - 16837.711: 73.3879% ( 85) 00:08:19.203 16837.711 - 16938.535: 74.4296% ( 84) 00:08:19.203 16938.535 - 17039.360: 75.8805% ( 117) 00:08:19.203 17039.360 - 17140.185: 77.2941% ( 114) 00:08:19.203 17140.185 - 17241.009: 78.9062% ( 130) 00:08:19.203 17241.009 - 
17341.834: 80.4688% ( 126) 00:08:19.203 17341.834 - 17442.658: 81.7956% ( 107) 00:08:19.203 17442.658 - 17543.483: 82.8125% ( 82) 00:08:19.203 17543.483 - 17644.308: 83.8418% ( 83) 00:08:19.203 17644.308 - 17745.132: 85.3919% ( 125) 00:08:19.203 17745.132 - 17845.957: 86.5079% ( 90) 00:08:19.203 17845.957 - 17946.782: 87.4008% ( 72) 00:08:19.203 17946.782 - 18047.606: 88.1696% ( 62) 00:08:19.203 18047.606 - 18148.431: 88.7773% ( 49) 00:08:19.203 18148.431 - 18249.255: 89.3849% ( 49) 00:08:19.203 18249.255 - 18350.080: 89.9058% ( 42) 00:08:19.203 18350.080 - 18450.905: 90.4514% ( 44) 00:08:19.203 18450.905 - 18551.729: 90.9598% ( 41) 00:08:19.203 18551.729 - 18652.554: 91.4311% ( 38) 00:08:19.203 18652.554 - 18753.378: 91.7535% ( 26) 00:08:19.203 18753.378 - 18854.203: 92.2123% ( 37) 00:08:19.203 18854.203 - 18955.028: 92.8075% ( 48) 00:08:19.203 18955.028 - 19055.852: 93.2540% ( 36) 00:08:19.203 19055.852 - 19156.677: 94.1716% ( 74) 00:08:19.203 19156.677 - 19257.502: 94.6181% ( 36) 00:08:19.203 19257.502 - 19358.326: 94.9901% ( 30) 00:08:19.203 19358.326 - 19459.151: 95.3497% ( 29) 00:08:19.203 19459.151 - 19559.975: 95.5977% ( 20) 00:08:19.203 19559.975 - 19660.800: 95.8705% ( 22) 00:08:19.203 19660.800 - 19761.625: 96.1806% ( 25) 00:08:19.203 19761.625 - 19862.449: 96.5774% ( 32) 00:08:19.203 19862.449 - 19963.274: 96.9370% ( 29) 00:08:19.203 19963.274 - 20064.098: 97.3958% ( 37) 00:08:19.203 20064.098 - 20164.923: 97.8919% ( 40) 00:08:19.203 20164.923 - 20265.748: 98.1523% ( 21) 00:08:19.203 20265.748 - 20366.572: 98.3383% ( 15) 00:08:19.203 20366.572 - 20467.397: 98.4995% ( 13) 00:08:19.203 20467.397 - 20568.222: 98.6359% ( 11) 00:08:19.203 20568.222 - 20669.046: 98.8839% ( 20) 00:08:19.203 20669.046 - 20769.871: 99.0327% ( 12) 00:08:19.203 20769.871 - 20870.695: 99.1071% ( 6) 00:08:19.203 20870.695 - 20971.520: 99.1691% ( 5) 00:08:19.203 20971.520 - 21072.345: 99.2063% ( 3) 00:08:19.203 27625.945 - 27827.594: 99.2808% ( 6) 00:08:19.203 27827.594 - 28029.243: 99.3924% ( 9) 00:08:19.203 28029.243 - 28230.892: 99.5040% ( 9) 00:08:19.204 28230.892 - 28432.542: 99.6032% ( 8) 00:08:19.204 28432.542 - 28634.191: 99.7024% ( 8) 00:08:19.204 28634.191 - 28835.840: 99.8264% ( 10) 00:08:19.204 28835.840 - 29037.489: 99.9380% ( 9) 00:08:19.204 29037.489 - 29239.138: 100.0000% ( 5) 00:08:19.204 00:08:19.204 22:52:58 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:19.204 00:08:19.204 real 0m2.562s 00:08:19.204 user 0m2.149s 00:08:19.204 sys 0m0.271s 00:08:19.204 22:52:58 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.204 22:52:58 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:19.204 ************************************ 00:08:19.204 END TEST nvme_perf 00:08:19.204 ************************************ 00:08:19.204 22:52:58 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:19.204 22:52:58 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:19.204 22:52:58 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.204 22:52:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:19.204 ************************************ 00:08:19.204 START TEST nvme_hello_world 00:08:19.204 ************************************ 00:08:19.204 22:52:58 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:19.465 Initializing NVMe Controllers 00:08:19.465 Attached to 0000:00:10.0 
00:08:19.465 Namespace ID: 1 size: 6GB 00:08:19.465 Attached to 0000:00:11.0 00:08:19.465 Namespace ID: 1 size: 5GB 00:08:19.465 Attached to 0000:00:13.0 00:08:19.465 Namespace ID: 1 size: 1GB 00:08:19.465 Attached to 0000:00:12.0 00:08:19.465 Namespace ID: 1 size: 4GB 00:08:19.465 Namespace ID: 2 size: 4GB 00:08:19.465 Namespace ID: 3 size: 4GB 00:08:19.465 Initialization complete. 00:08:19.465 INFO: using host memory buffer for IO 00:08:19.465 Hello world! 00:08:19.465 INFO: using host memory buffer for IO 00:08:19.465 Hello world! 00:08:19.465 INFO: using host memory buffer for IO 00:08:19.465 Hello world! 00:08:19.465 INFO: using host memory buffer for IO 00:08:19.465 Hello world! 00:08:19.465 INFO: using host memory buffer for IO 00:08:19.465 Hello world! 00:08:19.465 INFO: using host memory buffer for IO 00:08:19.465 Hello world! 00:08:19.465 00:08:19.465 real 0m0.252s 00:08:19.465 user 0m0.086s 00:08:19.465 sys 0m0.120s 00:08:19.465 ************************************ 00:08:19.465 END TEST nvme_hello_world 00:08:19.465 ************************************ 00:08:19.465 22:52:58 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.465 22:52:58 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:19.465 22:52:58 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:19.465 22:52:58 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:19.465 22:52:58 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.465 22:52:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:19.465 ************************************ 00:08:19.465 START TEST nvme_sgl 00:08:19.465 ************************************ 00:08:19.465 22:52:58 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:19.727 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:08:19.727 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:08:19.727 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:08:19.727 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:08:19.727 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:08:19.727 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:08:19.727 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:08:19.727 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:08:19.727 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:08:19.727 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:08:19.727 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:08:19.727 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:08:19.727 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:08:19.727 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:08:19.727 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:08:19.727 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:08:19.727 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:08:19.727 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:08:19.728 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:08:19.728 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:08:19.728 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:08:19.728 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:08:19.728 0000:00:13.0: build_io_request_10 
Invalid IO length parameter 00:08:19.728 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:08:19.728 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:08:19.728 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:08:19.728 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:08:19.728 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:08:19.728 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:08:19.728 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:08:19.728 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:08:19.728 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:08:19.728 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:08:19.728 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:08:19.728 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:19.728 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:19.728 NVMe Readv/Writev Request test 00:08:19.728 Attached to 0000:00:10.0 00:08:19.728 Attached to 0000:00:11.0 00:08:19.728 Attached to 0000:00:13.0 00:08:19.728 Attached to 0000:00:12.0 00:08:19.728 0000:00:10.0: build_io_request_2 test passed 00:08:19.728 0000:00:10.0: build_io_request_4 test passed 00:08:19.728 0000:00:10.0: build_io_request_5 test passed 00:08:19.728 0000:00:10.0: build_io_request_6 test passed 00:08:19.728 0000:00:10.0: build_io_request_7 test passed 00:08:19.728 0000:00:10.0: build_io_request_10 test passed 00:08:19.728 0000:00:11.0: build_io_request_2 test passed 00:08:19.728 0000:00:11.0: build_io_request_4 test passed 00:08:19.728 0000:00:11.0: build_io_request_5 test passed 00:08:19.728 0000:00:11.0: build_io_request_6 test passed 00:08:19.728 0000:00:11.0: build_io_request_7 test passed 00:08:19.728 0000:00:11.0: build_io_request_10 test passed 00:08:19.728 Cleaning up... 00:08:19.728 00:08:19.728 real 0m0.313s 00:08:19.728 user 0m0.138s 00:08:19.728 sys 0m0.127s 00:08:19.728 22:52:58 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.728 ************************************ 00:08:19.728 END TEST nvme_sgl 00:08:19.728 22:52:58 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:19.728 ************************************ 00:08:19.728 22:52:58 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:19.728 22:52:58 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:19.728 22:52:58 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.728 22:52:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:19.990 ************************************ 00:08:19.990 START TEST nvme_e2edp 00:08:19.990 ************************************ 00:08:19.990 22:52:58 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:19.990 NVMe Write/Read with End-to-End data protection test 00:08:19.990 Attached to 0000:00:10.0 00:08:19.990 Attached to 0000:00:11.0 00:08:19.990 Attached to 0000:00:13.0 00:08:19.990 Attached to 0000:00:12.0 00:08:19.990 Cleaning up... 
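The sgl test above exercises the vectored IO path: spdk_nvme_ns_cmd_readv()/spdk_nvme_ns_cmd_writev() fetch the scatter-gather list through a pair of caller callbacks, and the "Invalid IO length parameter" lines are the expected rejections for requests whose SGE lengths do not add up to a valid IO size. A sketch of that callback pattern; the sgl_ctx struct and helper names are illustrative, not the test's actual code:

#include <sys/uio.h>
#include "spdk/nvme.h"

struct sgl_ctx {
    struct iovec *iovs;    /* caller-provided SGE list */
    int           iovcnt;
    int           pos;     /* next SGE to hand to the driver */
};

/* The driver calls this before (re)walking the SGL from byte `offset`. */
static void reset_sgl(void *cb_arg, uint32_t offset)
{
    struct sgl_ctx *ctx = cb_arg;

    (void)offset;          /* a full implementation would seek to `offset` */
    ctx->pos = 0;
}

/* Called repeatedly to fetch SGEs; their lengths must sum to the IO size,
 * otherwise submission fails with "Invalid IO length parameter". */
static int next_sge(void *cb_arg, void **address, uint32_t *length)
{
    struct sgl_ctx *ctx = cb_arg;

    if (ctx->pos >= ctx->iovcnt) {
        return -1;
    }
    *address = ctx->iovs[ctx->pos].iov_base;
    *length = (uint32_t)ctx->iovs[ctx->pos].iov_len;
    ctx->pos++;
    return 0;
}

/* Vectored write of lba_count blocks starting at lba. */
static int submit_writev(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                         uint64_t lba, uint32_t lba_count,
                         spdk_nvme_cmd_cb cb_fn, struct sgl_ctx *ctx)
{
    return spdk_nvme_ns_cmd_writev(ns, qpair, lba, lba_count, cb_fn, ctx,
                                   0 /* io_flags */, reset_sgl, next_sge);
}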
00:08:19.990 00:08:19.990 real 0m0.232s 00:08:19.990 user 0m0.065s 00:08:19.990 sys 0m0.123s 00:08:19.990 ************************************ 00:08:19.990 22:52:59 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.990 22:52:59 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:19.990 END TEST nvme_e2edp 00:08:19.990 ************************************ 00:08:20.252 22:52:59 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:20.252 22:52:59 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:20.252 22:52:59 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.252 22:52:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:20.252 ************************************ 00:08:20.252 START TEST nvme_reserve 00:08:20.252 ************************************ 00:08:20.252 22:52:59 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:20.517 ===================================================== 00:08:20.517 NVMe Controller at PCI bus 0, device 16, function 0 00:08:20.517 ===================================================== 00:08:20.517 Reservations: Not Supported 00:08:20.517 ===================================================== 00:08:20.517 NVMe Controller at PCI bus 0, device 17, function 0 00:08:20.517 ===================================================== 00:08:20.517 Reservations: Not Supported 00:08:20.517 ===================================================== 00:08:20.517 NVMe Controller at PCI bus 0, device 19, function 0 00:08:20.517 ===================================================== 00:08:20.517 Reservations: Not Supported 00:08:20.517 ===================================================== 00:08:20.517 NVMe Controller at PCI bus 0, device 18, function 0 00:08:20.517 ===================================================== 00:08:20.517 Reservations: Not Supported 00:08:20.517 Reservation test passed 00:08:20.517 00:08:20.517 real 0m0.240s 00:08:20.517 user 0m0.072s 00:08:20.517 sys 0m0.118s 00:08:20.517 22:52:59 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.517 ************************************ 00:08:20.517 END TEST nvme_reserve 00:08:20.517 ************************************ 00:08:20.517 22:52:59 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:20.517 22:52:59 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:20.518 22:52:59 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:20.518 22:52:59 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.518 22:52:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:20.518 ************************************ 00:08:20.518 START TEST nvme_err_injection 00:08:20.518 ************************************ 00:08:20.518 22:52:59 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:20.780 NVMe Error Injection test 00:08:20.780 Attached to 0000:00:10.0 00:08:20.780 Attached to 0000:00:11.0 00:08:20.780 Attached to 0000:00:13.0 00:08:20.780 Attached to 0000:00:12.0 00:08:20.780 0000:00:12.0: get features failed as expected 00:08:20.780 0000:00:10.0: get features failed as expected 00:08:20.780 0000:00:11.0: get features failed as expected 00:08:20.780 0000:00:13.0: get features failed as expected 00:08:20.780 
0000:00:10.0: get features successfully as expected 00:08:20.780 0000:00:11.0: get features successfully as expected 00:08:20.781 0000:00:13.0: get features successfully as expected 00:08:20.781 0000:00:12.0: get features successfully as expected 00:08:20.781 0000:00:12.0: read failed as expected 00:08:20.781 0000:00:10.0: read failed as expected 00:08:20.781 0000:00:11.0: read failed as expected 00:08:20.781 0000:00:13.0: read failed as expected 00:08:20.781 0000:00:12.0: read successfully as expected 00:08:20.781 0000:00:10.0: read successfully as expected 00:08:20.781 0000:00:11.0: read successfully as expected 00:08:20.781 0000:00:13.0: read successfully as expected 00:08:20.781 Cleaning up... 00:08:20.781 00:08:20.781 real 0m0.256s 00:08:20.781 user 0m0.093s 00:08:20.781 sys 0m0.114s 00:08:20.781 22:52:59 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.781 22:52:59 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:20.781 ************************************ 00:08:20.781 END TEST nvme_err_injection 00:08:20.781 ************************************ 00:08:20.781 22:52:59 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:20.781 22:52:59 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:08:20.781 22:52:59 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.781 22:52:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:20.781 ************************************ 00:08:20.781 START TEST nvme_overhead 00:08:20.781 ************************************ 00:08:20.781 22:52:59 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:22.169 Initializing NVMe Controllers 00:08:22.169 Attached to 0000:00:10.0 00:08:22.169 Attached to 0000:00:11.0 00:08:22.169 Attached to 0000:00:13.0 00:08:22.169 Attached to 0000:00:12.0 00:08:22.169 Initialization complete. Launching workers. 
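The error-injection test above works by arming the driver to fail a fixed number of commands of one opcode: the first Get Features on each controller reports "failed as expected", the injection is removed, and the retry reports "successfully as expected". A sketch of the arm/disarm calls against the admin queue, per my reading of spdk/nvme.h (a NULL qpair selects the admin qpair; whether the injection takes effect can depend on how the driver was built):

#include "spdk/nvme.h"

/* Arm: the next Get Features admin command completes as Invalid Field. */
static int arm_get_features_failure(struct spdk_nvme_ctrlr *ctrlr)
{
    return spdk_nvme_qpair_add_cmd_error_injection(ctrlr,
            NULL,                          /* NULL qpair: inject on the admin queue */
            SPDK_NVME_OPC_GET_FEATURES,
            true,                          /* fail in the driver, do not submit */
            0,                             /* no artificial timeout */
            1,                             /* fail exactly one command */
            SPDK_NVME_SCT_GENERIC,
            SPDK_NVME_SC_INVALID_FIELD);
}

/* Disarm: the retried Get Features then completes normally. */
static void disarm_get_features_failure(struct spdk_nvme_ctrlr *ctrlr)
{
    spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, NULL,
                                               SPDK_NVME_OPC_GET_FEATURES);
}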
00:08:22.169 submit (in ns) avg, min, max = 16683.3, 13206.9, 313937.7 00:08:22.169 complete (in ns) avg, min, max = 9664.3, 7816.9, 566052.3 00:08:22.169 00:08:22.169 Submit histogram 00:08:22.169 ================ 00:08:22.169 Range in us Cumulative Count 00:08:22.169 13.194 - 13.292: 0.0712% ( 2) 00:08:22.169 13.391 - 13.489: 0.1425% ( 2) 00:08:22.169 13.489 - 13.588: 0.2137% ( 2) 00:08:22.169 13.686 - 13.785: 0.4986% ( 8) 00:08:22.169 13.785 - 13.883: 0.8191% ( 9) 00:08:22.169 13.883 - 13.982: 1.5670% ( 21) 00:08:22.169 13.982 - 14.080: 3.0983% ( 43) 00:08:22.169 14.080 - 14.178: 5.0926% ( 56) 00:08:22.169 14.178 - 14.277: 7.4786% ( 67) 00:08:22.169 14.277 - 14.375: 10.4701% ( 84) 00:08:22.169 14.375 - 14.474: 13.2479% ( 78) 00:08:22.169 14.474 - 14.572: 16.3462% ( 87) 00:08:22.169 14.572 - 14.671: 19.4444% ( 87) 00:08:22.169 14.671 - 14.769: 22.3647% ( 82) 00:08:22.169 14.769 - 14.868: 26.9231% ( 128) 00:08:22.169 14.868 - 14.966: 31.8732% ( 139) 00:08:22.169 14.966 - 15.065: 37.5000% ( 158) 00:08:22.169 15.065 - 15.163: 43.1624% ( 159) 00:08:22.169 15.163 - 15.262: 49.4302% ( 176) 00:08:22.169 15.262 - 15.360: 55.4487% ( 169) 00:08:22.169 15.360 - 15.458: 60.8262% ( 151) 00:08:22.169 15.458 - 15.557: 64.7436% ( 110) 00:08:22.169 15.557 - 15.655: 68.4473% ( 104) 00:08:22.169 15.655 - 15.754: 70.8333% ( 67) 00:08:22.169 15.754 - 15.852: 72.6496% ( 51) 00:08:22.169 15.852 - 15.951: 73.8604% ( 34) 00:08:22.169 15.951 - 16.049: 75.1781% ( 37) 00:08:22.169 16.049 - 16.148: 76.2464% ( 30) 00:08:22.169 16.148 - 16.246: 76.6026% ( 10) 00:08:22.169 16.246 - 16.345: 77.1368% ( 15) 00:08:22.169 16.345 - 16.443: 77.8134% ( 19) 00:08:22.169 16.443 - 16.542: 78.0983% ( 8) 00:08:22.169 16.542 - 16.640: 78.6681% ( 16) 00:08:22.169 16.640 - 16.738: 78.9530% ( 8) 00:08:22.169 16.738 - 16.837: 79.3091% ( 10) 00:08:22.169 16.837 - 16.935: 79.7365% ( 12) 00:08:22.169 16.935 - 17.034: 80.0570% ( 9) 00:08:22.169 17.034 - 17.132: 80.5199% ( 13) 00:08:22.169 17.132 - 17.231: 80.8761% ( 10) 00:08:22.169 17.231 - 17.329: 81.3034% ( 12) 00:08:22.169 17.329 - 17.428: 81.5527% ( 7) 00:08:22.169 17.428 - 17.526: 81.9088% ( 10) 00:08:22.169 17.526 - 17.625: 82.5142% ( 17) 00:08:22.169 17.625 - 17.723: 82.9060% ( 11) 00:08:22.169 17.723 - 17.822: 83.0840% ( 5) 00:08:22.169 17.822 - 17.920: 83.5826% ( 14) 00:08:22.169 17.920 - 18.018: 84.0456% ( 13) 00:08:22.169 18.018 - 18.117: 84.4017% ( 10) 00:08:22.169 18.117 - 18.215: 84.7222% ( 9) 00:08:22.169 18.215 - 18.314: 85.0071% ( 8) 00:08:22.169 18.314 - 18.412: 85.2920% ( 8) 00:08:22.169 18.412 - 18.511: 85.3989% ( 3) 00:08:22.169 18.511 - 18.609: 85.7194% ( 9) 00:08:22.169 18.609 - 18.708: 86.0043% ( 8) 00:08:22.169 18.708 - 18.806: 86.2536% ( 7) 00:08:22.169 18.806 - 18.905: 86.3248% ( 2) 00:08:22.169 18.905 - 19.003: 86.6453% ( 9) 00:08:22.169 19.003 - 19.102: 87.0370% ( 11) 00:08:22.169 19.102 - 19.200: 87.3575% ( 9) 00:08:22.170 19.200 - 19.298: 87.7137% ( 10) 00:08:22.170 19.298 - 19.397: 88.1054% ( 11) 00:08:22.170 19.397 - 19.495: 88.4259% ( 9) 00:08:22.170 19.495 - 19.594: 88.6396% ( 6) 00:08:22.170 19.594 - 19.692: 88.9957% ( 10) 00:08:22.170 19.692 - 19.791: 89.2450% ( 7) 00:08:22.170 19.791 - 19.889: 89.3162% ( 2) 00:08:22.170 19.889 - 19.988: 89.5299% ( 6) 00:08:22.170 19.988 - 20.086: 89.6368% ( 3) 00:08:22.170 20.086 - 20.185: 89.8504% ( 6) 00:08:22.170 20.185 - 20.283: 90.1353% ( 8) 00:08:22.170 20.283 - 20.382: 90.4558% ( 9) 00:08:22.170 20.382 - 20.480: 90.8476% ( 11) 00:08:22.170 20.480 - 20.578: 91.1681% ( 9) 00:08:22.170 20.578 - 20.677: 91.4174% ( 
7) 00:08:22.170 20.677 - 20.775: 91.5598% ( 4) 00:08:22.170 20.775 - 20.874: 91.9872% ( 12) 00:08:22.170 20.874 - 20.972: 92.3433% ( 10) 00:08:22.170 20.972 - 21.071: 92.6994% ( 10) 00:08:22.170 21.071 - 21.169: 92.9131% ( 6) 00:08:22.170 21.169 - 21.268: 93.1268% ( 6) 00:08:22.170 21.268 - 21.366: 93.3761% ( 7) 00:08:22.170 21.366 - 21.465: 93.6254% ( 7) 00:08:22.170 21.465 - 21.563: 93.9103% ( 8) 00:08:22.170 21.563 - 21.662: 94.2308% ( 9) 00:08:22.170 21.662 - 21.760: 94.3376% ( 3) 00:08:22.170 21.760 - 21.858: 94.5157% ( 5) 00:08:22.170 21.858 - 21.957: 94.7293% ( 6) 00:08:22.170 21.957 - 22.055: 94.8718% ( 4) 00:08:22.170 22.055 - 22.154: 94.9074% ( 1) 00:08:22.170 22.154 - 22.252: 94.9430% ( 1) 00:08:22.170 22.351 - 22.449: 95.0499% ( 3) 00:08:22.170 22.449 - 22.548: 95.1567% ( 3) 00:08:22.170 22.548 - 22.646: 95.2279% ( 2) 00:08:22.170 22.646 - 22.745: 95.3348% ( 3) 00:08:22.170 22.745 - 22.843: 95.5128% ( 5) 00:08:22.170 22.843 - 22.942: 95.6197% ( 3) 00:08:22.170 22.942 - 23.040: 95.7265% ( 3) 00:08:22.170 23.040 - 23.138: 95.8689% ( 4) 00:08:22.170 23.138 - 23.237: 95.9758% ( 3) 00:08:22.170 23.237 - 23.335: 96.2251% ( 7) 00:08:22.170 23.335 - 23.434: 96.4744% ( 7) 00:08:22.170 23.532 - 23.631: 96.6168% ( 4) 00:08:22.170 23.631 - 23.729: 96.7236% ( 3) 00:08:22.170 23.729 - 23.828: 96.8661% ( 4) 00:08:22.170 23.828 - 23.926: 96.9729% ( 3) 00:08:22.170 23.926 - 24.025: 97.0085% ( 1) 00:08:22.170 24.025 - 24.123: 97.0798% ( 2) 00:08:22.170 24.123 - 24.222: 97.1510% ( 2) 00:08:22.170 24.222 - 24.320: 97.2222% ( 2) 00:08:22.170 24.418 - 24.517: 97.2934% ( 2) 00:08:22.170 24.517 - 24.615: 97.3291% ( 1) 00:08:22.170 24.615 - 24.714: 97.4003% ( 2) 00:08:22.170 24.911 - 25.009: 97.4715% ( 2) 00:08:22.170 25.009 - 25.108: 97.5071% ( 1) 00:08:22.170 25.108 - 25.206: 97.5427% ( 1) 00:08:22.170 25.206 - 25.403: 97.6140% ( 2) 00:08:22.170 25.403 - 25.600: 97.7920% ( 5) 00:08:22.170 25.600 - 25.797: 97.8632% ( 2) 00:08:22.170 25.797 - 25.994: 97.9345% ( 2) 00:08:22.170 25.994 - 26.191: 98.0057% ( 2) 00:08:22.170 26.191 - 26.388: 98.0413% ( 1) 00:08:22.170 26.388 - 26.585: 98.1125% ( 2) 00:08:22.170 26.978 - 27.175: 98.1838% ( 2) 00:08:22.170 27.372 - 27.569: 98.2194% ( 1) 00:08:22.170 27.766 - 27.963: 98.2906% ( 2) 00:08:22.170 28.160 - 28.357: 98.3262% ( 1) 00:08:22.170 28.357 - 28.554: 98.3974% ( 2) 00:08:22.170 28.554 - 28.751: 98.4330% ( 1) 00:08:22.170 28.751 - 28.948: 98.5043% ( 2) 00:08:22.170 29.538 - 29.735: 98.5399% ( 1) 00:08:22.170 29.932 - 30.129: 98.5755% ( 1) 00:08:22.170 30.129 - 30.326: 98.6111% ( 1) 00:08:22.170 30.720 - 30.917: 98.6467% ( 1) 00:08:22.170 31.114 - 31.311: 98.6823% ( 1) 00:08:22.170 31.311 - 31.508: 98.7179% ( 1) 00:08:22.170 31.508 - 31.705: 98.7536% ( 1) 00:08:22.170 31.902 - 32.098: 98.7892% ( 1) 00:08:22.170 32.098 - 32.295: 98.8248% ( 1) 00:08:22.170 32.295 - 32.492: 98.8604% ( 1) 00:08:22.170 32.886 - 33.083: 98.8960% ( 1) 00:08:22.170 33.477 - 33.674: 98.9672% ( 2) 00:08:22.170 33.871 - 34.068: 99.0028% ( 1) 00:08:22.170 34.068 - 34.265: 99.0385% ( 1) 00:08:22.170 34.265 - 34.462: 99.1097% ( 2) 00:08:22.170 34.855 - 35.052: 99.1809% ( 2) 00:08:22.170 35.446 - 35.643: 99.2165% ( 1) 00:08:22.170 36.825 - 37.022: 99.2521% ( 1) 00:08:22.170 37.612 - 37.809: 99.2877% ( 1) 00:08:22.170 38.991 - 39.188: 99.3234% ( 1) 00:08:22.170 39.582 - 39.778: 99.3590% ( 1) 00:08:22.170 43.520 - 43.717: 99.3946% ( 1) 00:08:22.170 44.702 - 44.898: 99.4302% ( 1) 00:08:22.170 45.883 - 46.080: 99.4658% ( 1) 00:08:22.170 53.563 - 53.957: 99.5014% ( 1) 00:08:22.170 56.320 - 
56.714: 99.5370% ( 1) 00:08:22.170 59.471 - 59.865: 99.5726% ( 1) 00:08:22.170 69.317 - 69.711: 99.6083% ( 1) 00:08:22.170 70.105 - 70.498: 99.6795% ( 2) 00:08:22.170 70.498 - 70.892: 99.7151% ( 1) 00:08:22.170 74.831 - 75.225: 99.7507% ( 1) 00:08:22.170 85.858 - 86.252: 99.7863% ( 1) 00:08:22.170 89.403 - 89.797: 99.8219% ( 1) 00:08:22.170 100.825 - 101.612: 99.8575% ( 1) 00:08:22.170 101.612 - 102.400: 99.8932% ( 1) 00:08:22.170 107.126 - 107.914: 99.9288% ( 1) 00:08:22.170 134.695 - 135.483: 99.9644% ( 1) 00:08:22.170 313.502 - 315.077: 100.0000% ( 1) 00:08:22.170 00:08:22.170 Complete histogram 00:08:22.170 ================== 00:08:22.170 Range in us Cumulative Count 00:08:22.170 7.778 - 7.828: 0.0356% ( 1) 00:08:22.170 7.828 - 7.877: 0.2493% ( 6) 00:08:22.170 7.877 - 7.926: 0.8547% ( 17) 00:08:22.170 7.926 - 7.975: 2.1011% ( 35) 00:08:22.170 7.975 - 8.025: 4.6652% ( 72) 00:08:22.170 8.025 - 8.074: 7.3362% ( 75) 00:08:22.170 8.074 - 8.123: 10.1496% ( 79) 00:08:22.170 8.123 - 8.172: 14.2094% ( 114) 00:08:22.170 8.172 - 8.222: 17.6994% ( 98) 00:08:22.170 8.222 - 8.271: 22.0442% ( 122) 00:08:22.170 8.271 - 8.320: 27.8490% ( 163) 00:08:22.170 8.320 - 8.369: 36.1823% ( 234) 00:08:22.170 8.369 - 8.418: 43.9815% ( 219) 00:08:22.170 8.418 - 8.468: 50.9615% ( 196) 00:08:22.170 8.468 - 8.517: 58.1553% ( 202) 00:08:22.170 8.517 - 8.566: 64.6011% ( 181) 00:08:22.170 8.566 - 8.615: 69.6581% ( 142) 00:08:22.170 8.615 - 8.665: 73.5399% ( 109) 00:08:22.170 8.665 - 8.714: 77.5285% ( 112) 00:08:22.170 8.714 - 8.763: 80.2707% ( 77) 00:08:22.170 8.763 - 8.812: 82.5855% ( 65) 00:08:22.170 8.812 - 8.862: 84.3661% ( 50) 00:08:22.170 8.862 - 8.911: 85.4345% ( 30) 00:08:22.170 8.911 - 8.960: 86.4316% ( 28) 00:08:22.170 8.960 - 9.009: 87.2863% ( 24) 00:08:22.170 9.009 - 9.058: 87.8205% ( 15) 00:08:22.170 9.058 - 9.108: 88.8889% ( 30) 00:08:22.170 9.108 - 9.157: 89.4943% ( 17) 00:08:22.170 9.157 - 9.206: 90.0641% ( 16) 00:08:22.170 9.206 - 9.255: 90.5271% ( 13) 00:08:22.170 9.255 - 9.305: 90.8120% ( 8) 00:08:22.170 9.305 - 9.354: 91.0613% ( 7) 00:08:22.170 9.354 - 9.403: 91.4174% ( 10) 00:08:22.170 9.403 - 9.452: 91.8091% ( 11) 00:08:22.170 9.452 - 9.502: 92.0584% ( 7) 00:08:22.170 9.502 - 9.551: 92.4145% ( 10) 00:08:22.170 9.551 - 9.600: 92.6282% ( 6) 00:08:22.170 9.600 - 9.649: 92.9843% ( 10) 00:08:22.170 9.649 - 9.698: 93.3048% ( 9) 00:08:22.170 9.698 - 9.748: 93.6254% ( 9) 00:08:22.170 9.748 - 9.797: 93.8746% ( 7) 00:08:22.170 9.797 - 9.846: 93.9459% ( 2) 00:08:22.170 9.846 - 9.895: 94.1952% ( 7) 00:08:22.170 9.895 - 9.945: 94.3732% ( 5) 00:08:22.170 9.945 - 9.994: 94.4801% ( 3) 00:08:22.170 9.994 - 10.043: 94.7650% ( 8) 00:08:22.170 10.043 - 10.092: 94.9430% ( 5) 00:08:22.170 10.092 - 10.142: 95.1211% ( 5) 00:08:22.170 10.142 - 10.191: 95.2635% ( 4) 00:08:22.170 10.191 - 10.240: 95.3704% ( 3) 00:08:22.170 10.240 - 10.289: 95.4416% ( 2) 00:08:22.170 10.289 - 10.338: 95.5128% ( 2) 00:08:22.170 10.338 - 10.388: 95.6197% ( 3) 00:08:22.170 10.388 - 10.437: 95.7265% ( 3) 00:08:22.170 10.437 - 10.486: 95.7621% ( 1) 00:08:22.170 10.486 - 10.535: 95.8689% ( 3) 00:08:22.170 10.535 - 10.585: 95.9402% ( 2) 00:08:22.170 10.683 - 10.732: 96.0470% ( 3) 00:08:22.170 10.732 - 10.782: 96.0826% ( 1) 00:08:22.170 10.782 - 10.831: 96.1182% ( 1) 00:08:22.170 10.978 - 11.028: 96.1538% ( 1) 00:08:22.170 11.028 - 11.077: 96.1895% ( 1) 00:08:22.170 11.077 - 11.126: 96.2251% ( 1) 00:08:22.170 11.126 - 11.175: 96.2607% ( 1) 00:08:22.170 11.175 - 11.225: 96.2963% ( 1) 00:08:22.170 11.323 - 11.372: 96.3675% ( 2) 00:08:22.170 11.372 - 
11.422: 96.4031% ( 1) 00:08:22.170 11.422 - 11.471: 96.4387% ( 1) 00:08:22.170 11.520 - 11.569: 96.4744% ( 1) 00:08:22.170 11.618 - 11.668: 96.5812% ( 3) 00:08:22.170 11.865 - 11.914: 96.6880% ( 3) 00:08:22.170 11.914 - 11.963: 96.7236% ( 1) 00:08:22.170 12.209 - 12.258: 96.7593% ( 1) 00:08:22.170 12.357 - 12.406: 96.8305% ( 2) 00:08:22.170 12.406 - 12.455: 96.8661% ( 1) 00:08:22.170 12.554 - 12.603: 96.9017% ( 1) 00:08:22.170 12.898 - 12.997: 96.9373% ( 1) 00:08:22.170 13.095 - 13.194: 96.9729% ( 1) 00:08:22.170 14.671 - 14.769: 97.0085% ( 1) 00:08:22.170 15.065 - 15.163: 97.0442% ( 1) 00:08:22.171 15.163 - 15.262: 97.0798% ( 1) 00:08:22.171 15.262 - 15.360: 97.1510% ( 2) 00:08:22.171 15.360 - 15.458: 97.2578% ( 3) 00:08:22.171 15.458 - 15.557: 97.2934% ( 1) 00:08:22.171 15.754 - 15.852: 97.4715% ( 5) 00:08:22.171 15.951 - 16.049: 97.6140% ( 4) 00:08:22.171 16.148 - 16.246: 97.6852% ( 2) 00:08:22.171 16.345 - 16.443: 97.8989% ( 6) 00:08:22.171 16.443 - 16.542: 97.9345% ( 1) 00:08:22.171 16.542 - 16.640: 97.9701% ( 1) 00:08:22.171 16.640 - 16.738: 98.0413% ( 2) 00:08:22.171 16.738 - 16.837: 98.1125% ( 2) 00:08:22.171 17.034 - 17.132: 98.1481% ( 1) 00:08:22.171 17.132 - 17.231: 98.1838% ( 1) 00:08:22.171 17.428 - 17.526: 98.2550% ( 2) 00:08:22.171 17.625 - 17.723: 98.3618% ( 3) 00:08:22.171 17.723 - 17.822: 98.4330% ( 2) 00:08:22.171 18.018 - 18.117: 98.4687% ( 1) 00:08:22.171 18.117 - 18.215: 98.5043% ( 1) 00:08:22.171 18.708 - 18.806: 98.5399% ( 1) 00:08:22.171 18.806 - 18.905: 98.5755% ( 1) 00:08:22.171 19.003 - 19.102: 98.6467% ( 2) 00:08:22.171 19.102 - 19.200: 98.6823% ( 1) 00:08:22.171 19.200 - 19.298: 98.7179% ( 1) 00:08:22.171 19.495 - 19.594: 98.7536% ( 1) 00:08:22.171 20.086 - 20.185: 98.7892% ( 1) 00:08:22.171 21.662 - 21.760: 98.8248% ( 1) 00:08:22.171 22.154 - 22.252: 98.8604% ( 1) 00:08:22.171 22.252 - 22.351: 98.8960% ( 1) 00:08:22.171 23.237 - 23.335: 98.9672% ( 2) 00:08:22.171 23.335 - 23.434: 99.0028% ( 1) 00:08:22.171 23.532 - 23.631: 99.0741% ( 2) 00:08:22.171 23.631 - 23.729: 99.1097% ( 1) 00:08:22.171 23.729 - 23.828: 99.1453% ( 1) 00:08:22.171 23.926 - 24.025: 99.1809% ( 1) 00:08:22.171 24.025 - 24.123: 99.2165% ( 1) 00:08:22.171 24.222 - 24.320: 99.2521% ( 1) 00:08:22.171 24.320 - 24.418: 99.2877% ( 1) 00:08:22.171 25.994 - 26.191: 99.3234% ( 1) 00:08:22.171 26.782 - 26.978: 99.3590% ( 1) 00:08:22.171 27.766 - 27.963: 99.3946% ( 1) 00:08:22.171 28.751 - 28.948: 99.4302% ( 1) 00:08:22.171 32.689 - 32.886: 99.4658% ( 1) 00:08:22.171 34.068 - 34.265: 99.5014% ( 1) 00:08:22.171 37.809 - 38.006: 99.5370% ( 1) 00:08:22.171 42.732 - 42.929: 99.5726% ( 1) 00:08:22.171 61.046 - 61.440: 99.6083% ( 1) 00:08:22.171 70.105 - 70.498: 99.6439% ( 1) 00:08:22.171 77.194 - 77.588: 99.6795% ( 1) 00:08:22.171 81.526 - 81.920: 99.7151% ( 1) 00:08:22.171 83.102 - 83.495: 99.7507% ( 1) 00:08:22.171 126.818 - 127.606: 99.7863% ( 1) 00:08:22.171 182.745 - 183.532: 99.8219% ( 1) 00:08:22.171 186.683 - 187.471: 99.8575% ( 1) 00:08:22.171 286.720 - 288.295: 99.8932% ( 1) 00:08:22.171 296.172 - 297.748: 99.9288% ( 1) 00:08:22.171 385.969 - 387.545: 99.9644% ( 1) 00:08:22.171 563.988 - 567.138: 100.0000% ( 1) 00:08:22.171 00:08:22.171 00:08:22.171 real 0m1.236s 00:08:22.171 user 0m1.072s 00:08:22.171 sys 0m0.113s 00:08:22.171 22:53:01 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.171 ************************************ 00:08:22.171 END TEST nvme_overhead 00:08:22.171 ************************************ 00:08:22.171 22:53:01 nvme.nvme_overhead -- 
common/autotest_common.sh@10 -- # set +x 00:08:22.171 22:53:01 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:22.171 22:53:01 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:22.171 22:53:01 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.171 22:53:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:22.171 ************************************ 00:08:22.171 START TEST nvme_arbitration 00:08:22.171 ************************************ 00:08:22.171 22:53:01 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:25.511 Initializing NVMe Controllers 00:08:25.511 Attached to 0000:00:10.0 00:08:25.511 Attached to 0000:00:11.0 00:08:25.511 Attached to 0000:00:13.0 00:08:25.511 Attached to 0000:00:12.0 00:08:25.511 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:08:25.511 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:08:25.511 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:08:25.511 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:25.511 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:25.511 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:25.511 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:25.511 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:25.511 Initialization complete. Launching workers. 00:08:25.511 Starting thread on core 1 with urgent priority queue 00:08:25.511 Starting thread on core 2 with urgent priority queue 00:08:25.511 Starting thread on core 3 with urgent priority queue 00:08:25.511 Starting thread on core 0 with urgent priority queue 00:08:25.511 QEMU NVMe Ctrl (12340 ) core 0: 3328.00 IO/s 30.05 secs/100000 ios 00:08:25.511 QEMU NVMe Ctrl (12342 ) core 0: 3340.00 IO/s 29.94 secs/100000 ios 00:08:25.511 QEMU NVMe Ctrl (12341 ) core 1: 3392.00 IO/s 29.48 secs/100000 ios 00:08:25.511 QEMU NVMe Ctrl (12342 ) core 1: 3392.00 IO/s 29.48 secs/100000 ios 00:08:25.511 QEMU NVMe Ctrl (12343 ) core 2: 3091.00 IO/s 32.35 secs/100000 ios 00:08:25.511 QEMU NVMe Ctrl (12342 ) core 3: 3118.67 IO/s 32.06 secs/100000 ios 00:08:25.511 ======================================================== 00:08:25.511 00:08:25.511 00:08:25.511 real 0m3.287s 00:08:25.511 user 0m9.036s 00:08:25.511 sys 0m0.146s 00:08:25.511 22:53:04 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.511 ************************************ 00:08:25.511 END TEST nvme_arbitration 00:08:25.511 ************************************ 00:08:25.511 22:53:04 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:25.511 22:53:04 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:25.511 22:53:04 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:25.511 22:53:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.511 22:53:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:25.511 ************************************ 00:08:25.511 START TEST nvme_single_aen 00:08:25.511 ************************************ 00:08:25.511 22:53:04 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:25.773 Asynchronous Event Request test 00:08:25.773 Attached to 0000:00:10.0 
00:08:25.773 Attached to 0000:00:11.0 00:08:25.773 Attached to 0000:00:13.0 00:08:25.773 Attached to 0000:00:12.0 00:08:25.773 Reset controller to setup AER completions for this process 00:08:25.773 Registering asynchronous event callbacks... 00:08:25.773 Getting orig temperature thresholds of all controllers 00:08:25.773 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:25.773 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:25.773 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:25.773 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:25.773 Setting all controllers temperature threshold low to trigger AER 00:08:25.773 Waiting for all controllers temperature threshold to be set lower 00:08:25.773 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:25.773 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:25.773 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:25.773 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:25.773 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:25.773 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:25.773 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:25.773 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:25.773 Waiting for all controllers to trigger AER and reset threshold 00:08:25.773 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:25.773 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:25.773 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:25.773 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:25.773 Cleaning up... 
00:08:25.773 00:08:25.773 real 0m0.247s 00:08:25.773 user 0m0.084s 00:08:25.773 sys 0m0.117s 00:08:25.773 22:53:04 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.773 ************************************ 00:08:25.773 END TEST nvme_single_aen 00:08:25.773 ************************************ 00:08:25.773 22:53:04 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:25.773 22:53:04 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:25.773 22:53:04 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:25.773 22:53:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.773 22:53:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:25.773 ************************************ 00:08:25.773 START TEST nvme_doorbell_aers 00:08:25.773 ************************************ 00:08:25.773 22:53:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:08:25.773 22:53:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:25.773 22:53:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:25.773 22:53:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:25.773 22:53:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:25.773 22:53:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:25.773 22:53:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:08:25.773 22:53:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:25.773 22:53:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:25.773 22:53:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:25.773 22:53:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:25.773 22:53:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:25.773 22:53:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:25.773 22:53:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:26.033 [2024-11-26 22:53:05.094321] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76601) is not found. Dropping the request. 00:08:36.040 Executing: test_write_invalid_db 00:08:36.040 Waiting for AER completion... 00:08:36.040 Failure: test_write_invalid_db 00:08:36.040 00:08:36.040 Executing: test_invalid_db_write_overflow_sq 00:08:36.040 Waiting for AER completion... 00:08:36.040 Failure: test_invalid_db_write_overflow_sq 00:08:36.040 00:08:36.040 Executing: test_invalid_db_write_overflow_cq 00:08:36.040 Waiting for AER completion... 
00:08:36.040 Failure: test_invalid_db_write_overflow_cq 00:08:36.040 00:08:36.040 22:53:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:36.040 22:53:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:36.040 [2024-11-26 22:53:15.103877] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76601) is not found. Dropping the request. 00:08:46.006 Executing: test_write_invalid_db 00:08:46.006 Waiting for AER completion... 00:08:46.006 Failure: test_write_invalid_db 00:08:46.006 00:08:46.006 Executing: test_invalid_db_write_overflow_sq 00:08:46.006 Waiting for AER completion... 00:08:46.006 Failure: test_invalid_db_write_overflow_sq 00:08:46.006 00:08:46.006 Executing: test_invalid_db_write_overflow_cq 00:08:46.006 Waiting for AER completion... 00:08:46.006 Failure: test_invalid_db_write_overflow_cq 00:08:46.006 00:08:46.006 22:53:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:46.006 22:53:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:46.006 [2024-11-26 22:53:25.129153] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76601) is not found. Dropping the request. 00:08:55.997 Executing: test_write_invalid_db 00:08:55.997 Waiting for AER completion... 00:08:55.997 Failure: test_write_invalid_db 00:08:55.997 00:08:55.997 Executing: test_invalid_db_write_overflow_sq 00:08:55.997 Waiting for AER completion... 00:08:55.997 Failure: test_invalid_db_write_overflow_sq 00:08:55.997 00:08:55.998 Executing: test_invalid_db_write_overflow_cq 00:08:55.998 Waiting for AER completion... 00:08:55.998 Failure: test_invalid_db_write_overflow_cq 00:08:55.998 00:08:55.998 22:53:34 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:55.998 22:53:34 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:56.256 [2024-11-26 22:53:35.174356] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76601) is not found. Dropping the request. 00:09:06.367 Executing: test_write_invalid_db 00:09:06.367 Waiting for AER completion... 00:09:06.367 Failure: test_write_invalid_db 00:09:06.367 00:09:06.367 Executing: test_invalid_db_write_overflow_sq 00:09:06.367 Waiting for AER completion... 00:09:06.367 Failure: test_invalid_db_write_overflow_sq 00:09:06.367 00:09:06.367 Executing: test_invalid_db_write_overflow_cq 00:09:06.367 Waiting for AER completion... 
00:09:06.367 Failure: test_invalid_db_write_overflow_cq 00:09:06.367 00:09:06.367 00:09:06.367 real 0m40.205s 00:09:06.367 user 0m34.195s 00:09:06.367 sys 0m5.596s 00:09:06.367 22:53:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.367 22:53:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:06.367 ************************************ 00:09:06.367 END TEST nvme_doorbell_aers 00:09:06.367 ************************************ 00:09:06.367 22:53:45 nvme -- nvme/nvme.sh@97 -- # uname 00:09:06.368 22:53:45 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:06.368 22:53:45 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:06.368 22:53:45 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:06.368 22:53:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.368 22:53:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:06.368 ************************************ 00:09:06.368 START TEST nvme_multi_aen 00:09:06.368 ************************************ 00:09:06.368 22:53:45 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:06.368 [2024-11-26 22:53:45.205105] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76601) is not found. Dropping the request. 00:09:06.368 [2024-11-26 22:53:45.205177] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76601) is not found. Dropping the request. 00:09:06.368 [2024-11-26 22:53:45.205191] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76601) is not found. Dropping the request. 00:09:06.368 [2024-11-26 22:53:45.206739] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76601) is not found. Dropping the request. 00:09:06.368 [2024-11-26 22:53:45.206768] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76601) is not found. Dropping the request. 00:09:06.368 [2024-11-26 22:53:45.206777] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76601) is not found. Dropping the request. 00:09:06.368 [2024-11-26 22:53:45.207956] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76601) is not found. Dropping the request. 00:09:06.368 [2024-11-26 22:53:45.207981] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76601) is not found. Dropping the request. 00:09:06.368 [2024-11-26 22:53:45.207992] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76601) is not found. Dropping the request. 00:09:06.368 [2024-11-26 22:53:45.209099] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76601) is not found. Dropping the request. 00:09:06.368 [2024-11-26 22:53:45.209124] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76601) is not found. Dropping the request. 00:09:06.368 [2024-11-26 22:53:45.209132] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76601) is not found. Dropping the request. 
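The nvme_doorbell_aers passes above all follow the same harness idiom that the xtrace records make visible: scripts/gen_nvme.sh emits a JSON controller config, jq pulls out each PCI address, and the doorbell_aers binary is run once per device under a 10-second timeout. A minimal standalone sketch of that loop, assuming only the repo paths the log itself shows, would be:

  #!/usr/bin/env bash
  # Sketch reconstructed from the xtrace above; rootdir matches the
  # /home/vagrant/spdk_repo/spdk layout shown in the log.
  rootdir=/home/vagrant/spdk_repo/spdk

  # Enumerate attached NVMe controllers the way get_nvme_bdfs does:
  # gen_nvme.sh prints a JSON config, jq extracts each traddr.
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))

  # Run the doorbell/AER test once per controller, capping each run at
  # 10 seconds while keeping the child's exit status.
  for bdf in "${bdfs[@]}"; do
    timeout --preserve-status 10 \
      "$rootdir/test/nvme/doorbell_aers/doorbell_aers" \
      -r "trtype:PCIe traddr:$bdf"
  done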
00:09:06.368 Child process pid: 77121 00:09:06.368 [Child] Asynchronous Event Request test 00:09:06.368 [Child] Attached to 0000:00:10.0 00:09:06.368 [Child] Attached to 0000:00:11.0 00:09:06.368 [Child] Attached to 0000:00:13.0 00:09:06.368 [Child] Attached to 0000:00:12.0 00:09:06.368 [Child] Registering asynchronous event callbacks... 00:09:06.368 [Child] Getting orig temperature thresholds of all controllers 00:09:06.368 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:06.368 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:06.368 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:06.368 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:06.368 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:06.368 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:06.368 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:06.368 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:06.368 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:06.368 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:06.368 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:06.368 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:06.368 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:06.368 [Child] Cleaning up... 00:09:06.368 Asynchronous Event Request test 00:09:06.368 Attached to 0000:00:10.0 00:09:06.368 Attached to 0000:00:11.0 00:09:06.368 Attached to 0000:00:13.0 00:09:06.368 Attached to 0000:00:12.0 00:09:06.368 Reset controller to setup AER completions for this process 00:09:06.368 Registering asynchronous event callbacks... 
00:09:06.368 Getting orig temperature thresholds of all controllers 00:09:06.368 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:06.368 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:06.368 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:06.368 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:06.368 Setting all controllers temperature threshold low to trigger AER 00:09:06.368 Waiting for all controllers temperature threshold to be set lower 00:09:06.368 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:06.368 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:06.368 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:06.368 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:06.368 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:06.368 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:06.368 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:06.368 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:06.368 Waiting for all controllers to trigger AER and reset threshold 00:09:06.368 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:06.368 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:06.368 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:06.368 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:06.368 Cleaning up... 00:09:06.368 00:09:06.368 real 0m0.416s 00:09:06.368 user 0m0.125s 00:09:06.368 sys 0m0.183s 00:09:06.368 22:53:45 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.368 22:53:45 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:06.368 ************************************ 00:09:06.368 END TEST nvme_multi_aen 00:09:06.368 ************************************ 00:09:06.628 22:53:45 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:06.628 22:53:45 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:06.628 22:53:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.628 22:53:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:06.628 ************************************ 00:09:06.628 START TEST nvme_startup 00:09:06.628 ************************************ 00:09:06.628 22:53:45 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:06.628 Initializing NVMe Controllers 00:09:06.628 Attached to 0000:00:10.0 00:09:06.628 Attached to 0000:00:11.0 00:09:06.628 Attached to 0000:00:13.0 00:09:06.628 Attached to 0000:00:12.0 00:09:06.628 Initialization complete. 00:09:06.628 Time used:146405.953 (us). 
00:09:06.628 00:09:06.628 real 0m0.206s 00:09:06.628 user 0m0.063s 00:09:06.628 sys 0m0.099s 00:09:06.628 22:53:45 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.628 ************************************ 00:09:06.628 END TEST nvme_startup 00:09:06.628 22:53:45 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:06.628 ************************************ 00:09:06.628 22:53:45 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:06.628 22:53:45 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:06.628 22:53:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.628 22:53:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:06.628 ************************************ 00:09:06.628 START TEST nvme_multi_secondary 00:09:06.628 ************************************ 00:09:06.628 22:53:45 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:09:06.628 22:53:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=77172 00:09:06.628 22:53:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=77173 00:09:06.628 22:53:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:06.886 22:53:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:06.886 22:53:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:10.184 Initializing NVMe Controllers 00:09:10.184 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:10.184 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:10.184 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:10.184 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:10.184 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:10.184 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:10.184 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:10.184 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:10.184 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:10.184 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:10.184 Initialization complete. Launching workers. 
00:09:10.184 ======================================================== 00:09:10.184 Latency(us) 00:09:10.184 Device Information : IOPS MiB/s Average min max 00:09:10.184 PCIE (0000:00:10.0) NSID 1 from core 2: 2522.56 9.85 6341.23 1138.56 30528.58 00:09:10.185 PCIE (0000:00:11.0) NSID 1 from core 2: 2522.56 9.85 6342.10 1189.71 22731.25 00:09:10.185 PCIE (0000:00:13.0) NSID 1 from core 2: 2522.56 9.85 6350.85 1139.84 24216.07 00:09:10.185 PCIE (0000:00:12.0) NSID 1 from core 2: 2522.56 9.85 6350.27 1169.67 25211.48 00:09:10.185 PCIE (0000:00:12.0) NSID 2 from core 2: 2522.56 9.85 6350.73 1179.19 25819.51 00:09:10.185 PCIE (0000:00:12.0) NSID 3 from core 2: 2522.56 9.85 6350.65 1035.47 24421.84 00:09:10.185 ======================================================== 00:09:10.185 Total : 15135.35 59.12 6347.64 1035.47 30528.58 00:09:10.185 00:09:10.185 Initializing NVMe Controllers 00:09:10.185 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:10.185 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:10.185 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:10.185 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:10.185 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:10.185 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:10.185 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:10.185 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:10.185 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:10.185 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:10.185 Initialization complete. Launching workers. 00:09:10.185 ======================================================== 00:09:10.185 Latency(us) 00:09:10.185 Device Information : IOPS MiB/s Average min max 00:09:10.185 PCIE (0000:00:10.0) NSID 1 from core 1: 6328.95 24.72 2526.61 836.81 8177.55 00:09:10.185 PCIE (0000:00:11.0) NSID 1 from core 1: 6328.95 24.72 2527.59 855.31 8709.37 00:09:10.185 PCIE (0000:00:13.0) NSID 1 from core 1: 6328.95 24.72 2527.57 848.08 8468.60 00:09:10.185 PCIE (0000:00:12.0) NSID 1 from core 1: 6328.95 24.72 2527.51 854.80 8400.94 00:09:10.185 PCIE (0000:00:12.0) NSID 2 from core 1: 6328.95 24.72 2527.47 857.81 8381.03 00:09:10.186 PCIE (0000:00:12.0) NSID 3 from core 1: 6328.95 24.72 2527.52 854.11 8689.97 00:09:10.186 ======================================================== 00:09:10.186 Total : 37973.70 148.33 2527.38 836.81 8709.37 00:09:10.186 00:09:10.186 22:53:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 77172 00:09:12.100 Initializing NVMe Controllers 00:09:12.100 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:12.100 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:12.100 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:12.100 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:12.100 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:12.100 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:12.100 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:12.100 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:12.100 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:12.100 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:12.100 Initialization complete. Launching workers. 
00:09:12.100 ======================================================== 00:09:12.100 Latency(us) 00:09:12.100 Device Information : IOPS MiB/s Average min max 00:09:12.100 PCIE (0000:00:10.0) NSID 1 from core 0: 9102.76 35.56 1756.44 633.28 13148.98 00:09:12.100 PCIE (0000:00:11.0) NSID 1 from core 0: 9102.16 35.56 1757.45 646.63 13085.97 00:09:12.100 PCIE (0000:00:13.0) NSID 1 from core 0: 9103.56 35.56 1757.15 588.76 9753.76 00:09:12.100 PCIE (0000:00:12.0) NSID 1 from core 0: 9101.96 35.55 1757.40 596.72 10848.89 00:09:12.100 PCIE (0000:00:12.0) NSID 2 from core 0: 9102.96 35.56 1757.16 552.80 11697.06 00:09:12.100 PCIE (0000:00:12.0) NSID 3 from core 0: 9102.16 35.56 1757.28 482.30 12121.64 00:09:12.100 ======================================================== 00:09:12.100 Total : 54615.53 213.34 1757.15 482.30 13148.98 00:09:12.100 00:09:12.100 22:53:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 77173 00:09:12.100 22:53:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=77242 00:09:12.100 22:53:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:12.100 22:53:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:12.100 22:53:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=77243 00:09:12.100 22:53:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:15.404 Initializing NVMe Controllers 00:09:15.404 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:15.404 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:15.404 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:15.404 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:15.404 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:15.404 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:15.404 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:15.404 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:15.404 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:15.404 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:15.404 Initialization complete. Launching workers. 
00:09:15.404 ======================================================== 00:09:15.404 Latency(us) 00:09:15.404 Device Information : IOPS MiB/s Average min max 00:09:15.404 PCIE (0000:00:10.0) NSID 1 from core 1: 4535.83 17.72 3525.82 829.90 10575.66 00:09:15.404 PCIE (0000:00:11.0) NSID 1 from core 1: 4535.83 17.72 3528.27 853.27 10200.90 00:09:15.404 PCIE (0000:00:13.0) NSID 1 from core 1: 4535.83 17.72 3528.21 872.62 9370.25 00:09:15.404 PCIE (0000:00:12.0) NSID 1 from core 1: 4535.83 17.72 3528.30 887.71 8526.50 00:09:15.404 PCIE (0000:00:12.0) NSID 2 from core 1: 4535.83 17.72 3528.24 883.21 9984.16 00:09:15.404 PCIE (0000:00:12.0) NSID 3 from core 1: 4535.83 17.72 3528.15 852.48 10650.46 00:09:15.404 ======================================================== 00:09:15.404 Total : 27215.01 106.31 3527.83 829.90 10650.46 00:09:15.404 00:09:15.404 Initializing NVMe Controllers 00:09:15.404 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:15.404 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:15.404 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:15.404 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:15.404 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:15.404 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:15.404 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:15.404 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:15.404 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:15.404 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:15.404 Initialization complete. Launching workers. 00:09:15.404 ======================================================== 00:09:15.404 Latency(us) 00:09:15.404 Device Information : IOPS MiB/s Average min max 00:09:15.404 PCIE (0000:00:10.0) NSID 1 from core 0: 4542.03 17.74 3520.96 937.58 8965.02 00:09:15.404 PCIE (0000:00:11.0) NSID 1 from core 0: 4542.03 17.74 3522.58 956.44 9424.67 00:09:15.404 PCIE (0000:00:13.0) NSID 1 from core 0: 4542.03 17.74 3522.45 953.94 9262.47 00:09:15.404 PCIE (0000:00:12.0) NSID 1 from core 0: 4542.03 17.74 3522.34 953.38 9246.68 00:09:15.404 PCIE (0000:00:12.0) NSID 2 from core 0: 4542.03 17.74 3522.23 971.45 9302.29 00:09:15.404 PCIE (0000:00:12.0) NSID 3 from core 0: 4542.03 17.74 3522.41 971.96 9120.52 00:09:15.404 ======================================================== 00:09:15.404 Total : 27252.18 106.45 3522.16 937.58 9424.67 00:09:15.405 00:09:17.319 Initializing NVMe Controllers 00:09:17.319 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:17.319 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:17.319 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:17.319 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:17.320 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:17.320 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:17.320 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:17.320 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:17.320 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:17.320 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:17.320 Initialization complete. Launching workers. 
00:09:17.320 ======================================================== 00:09:17.320 Latency(us) 00:09:17.320 Device Information : IOPS MiB/s Average min max 00:09:17.320 PCIE (0000:00:10.0) NSID 1 from core 2: 2774.27 10.84 5765.74 884.25 17953.08 00:09:17.320 PCIE (0000:00:11.0) NSID 1 from core 2: 2774.27 10.84 5767.38 902.84 19276.65 00:09:17.320 PCIE (0000:00:13.0) NSID 1 from core 2: 2774.27 10.84 5767.24 907.91 18324.52 00:09:17.320 PCIE (0000:00:12.0) NSID 1 from core 2: 2774.27 10.84 5766.80 891.72 18797.43 00:09:17.320 PCIE (0000:00:12.0) NSID 2 from core 2: 2774.27 10.84 5766.95 903.05 19191.51 00:09:17.320 PCIE (0000:00:12.0) NSID 3 from core 2: 2774.27 10.84 5766.78 903.00 19149.73 00:09:17.320 ======================================================== 00:09:17.320 Total : 16645.61 65.02 5766.82 884.25 19276.65 00:09:17.320 00:09:17.320 22:53:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 77242 00:09:17.320 22:53:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 77243 00:09:17.320 00:09:17.320 real 0m10.676s 00:09:17.320 user 0m18.335s 00:09:17.320 sys 0m0.665s 00:09:17.320 ************************************ 00:09:17.320 END TEST nvme_multi_secondary 00:09:17.320 ************************************ 00:09:17.320 22:53:56 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.320 22:53:56 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:17.581 22:53:56 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:17.581 22:53:56 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:17.581 22:53:56 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/76186 ]] 00:09:17.581 22:53:56 nvme -- common/autotest_common.sh@1094 -- # kill 76186 00:09:17.581 22:53:56 nvme -- common/autotest_common.sh@1095 -- # wait 76186 00:09:17.581 [2024-11-26 22:53:56.472197] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77120) is not found. Dropping the request. 00:09:17.581 [2024-11-26 22:53:56.472340] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77120) is not found. Dropping the request. 00:09:17.581 [2024-11-26 22:53:56.472377] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77120) is not found. Dropping the request. 00:09:17.581 [2024-11-26 22:53:56.472414] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77120) is not found. Dropping the request. 00:09:17.581 [2024-11-26 22:53:56.473223] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77120) is not found. Dropping the request. 00:09:17.581 [2024-11-26 22:53:56.473320] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77120) is not found. Dropping the request. 00:09:17.581 [2024-11-26 22:53:56.473365] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77120) is not found. Dropping the request. 00:09:17.581 [2024-11-26 22:53:56.473415] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77120) is not found. Dropping the request. 00:09:17.581 [2024-11-26 22:53:56.474167] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77120) is not found. Dropping the request. 
00:09:17.581 [2024-11-26 22:53:56.474235] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77120) is not found. Dropping the request. 00:09:17.581 [2024-11-26 22:53:56.474286] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77120) is not found. Dropping the request. 00:09:17.581 [2024-11-26 22:53:56.474354] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77120) is not found. Dropping the request. 00:09:17.582 [2024-11-26 22:53:56.475096] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77120) is not found. Dropping the request. 00:09:17.582 [2024-11-26 22:53:56.475190] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77120) is not found. Dropping the request. 00:09:17.582 [2024-11-26 22:53:56.475269] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77120) is not found. Dropping the request. 00:09:17.582 [2024-11-26 22:53:56.475339] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77120) is not found. Dropping the request. 00:09:17.582 22:53:56 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:09:17.582 22:53:56 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:09:17.582 22:53:56 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:17.582 22:53:56 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:17.582 22:53:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.582 22:53:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:17.582 ************************************ 00:09:17.582 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:17.582 ************************************ 00:09:17.582 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:17.582 * Looking for test storage... 
00:09:17.582 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:17.582 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:17.582 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:09:17.582 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:17.582 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:17.582 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:17.582 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:17.582 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:17.582 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:17.582 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:17.582 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:17.582 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:17.582 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:17.582 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:17.582 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:17.582 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:17.582 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:17.582 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:17.582 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:17.582 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:17.582 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:17.582 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:17.582 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:17.582 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:17.582 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:17.582 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:17.582 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:17.863 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:17.863 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:17.863 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:17.863 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:17.863 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:17.863 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:17.863 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:17.863 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:17.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.863 --rc genhtml_branch_coverage=1 00:09:17.863 --rc genhtml_function_coverage=1 00:09:17.863 --rc genhtml_legend=1 00:09:17.863 --rc geninfo_all_blocks=1 00:09:17.863 --rc geninfo_unexecuted_blocks=1 00:09:17.863 00:09:17.863 ' 00:09:17.863 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:17.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.863 --rc genhtml_branch_coverage=1 00:09:17.863 --rc genhtml_function_coverage=1 00:09:17.863 --rc genhtml_legend=1 00:09:17.863 --rc geninfo_all_blocks=1 00:09:17.863 --rc geninfo_unexecuted_blocks=1 00:09:17.863 00:09:17.863 ' 00:09:17.863 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:17.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.863 --rc genhtml_branch_coverage=1 00:09:17.863 --rc genhtml_function_coverage=1 00:09:17.863 --rc genhtml_legend=1 00:09:17.863 --rc geninfo_all_blocks=1 00:09:17.863 --rc geninfo_unexecuted_blocks=1 00:09:17.863 00:09:17.863 ' 00:09:17.863 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:17.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.863 --rc genhtml_branch_coverage=1 00:09:17.863 --rc genhtml_function_coverage=1 00:09:17.863 --rc genhtml_legend=1 00:09:17.863 --rc geninfo_all_blocks=1 00:09:17.863 --rc geninfo_unexecuted_blocks=1 00:09:17.863 00:09:17.863 ' 00:09:17.863 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:17.863 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:17.863 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:17.863 
22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:17.863 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:17.863 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:17.863 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:17.863 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:09:17.863 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:17.863 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:17.863 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:17.863 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:09:17.863 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:17.863 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:17.863 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:17.864 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:17.864 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:17.864 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:17.864 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:17.864 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:17.864 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=77409 00:09:17.864 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:17.864 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:17.864 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 77409 00:09:17.864 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 77409 ']' 00:09:17.864 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.864 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.864 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:17.864 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.864 22:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:17.864 [2024-11-26 22:53:56.851058] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:09:17.864 [2024-11-26 22:53:56.851172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77409 ] 00:09:18.126 [2024-11-26 22:53:56.994389] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:18.126 [2024-11-26 22:53:57.022329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:18.126 [2024-11-26 22:53:57.050245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.126 [2024-11-26 22:53:57.050480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:18.126 [2024-11-26 22:53:57.050671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:18.126 [2024-11-26 22:53:57.050732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.700 22:53:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.700 22:53:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:09:18.700 22:53:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:18.700 22:53:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.700 22:53:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:18.700 nvme0n1 00:09:18.700 22:53:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.700 22:53:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:18.700 22:53:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_eDvTi.txt 00:09:18.700 22:53:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:18.700 22:53:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.700 22:53:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:18.700 true 00:09:18.700 22:53:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.700 22:53:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:18.700 22:53:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732661637 00:09:18.700 22:53:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=77432 00:09:18.700 22:53:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:18.700 22:53:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 
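Before the injected reset below, the trace shows the two RPCs that set the stage; a standalone sketch using the rpc.py path and the exact flag values visible above (opcode 10 is the admin Get Features command the test later issues) could read:

  # Sketch of the setup the rpc_cmd wrapper performs above; paths and
  # flag values are taken from the trace, not from any other source.
  rootdir=/home/vagrant/spdk_repo/spdk
  rpc="$rootdir/scripts/rpc.py"

  # Attach the first controller to the target as bdev "nvme0".
  "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0

  # Arm a one-shot injected error on the next admin opcode 10 command:
  # hold it for up to 15 s (--do_not_submit) so the reset path has a
  # stuck command to recover, then complete it with sct 0 / sc 1.
  "$rpc" bdev_nvme_add_error_injection -n nvme0 \
    --cmd-type admin --opc 10 --timeout-in-us 15000000 \
    --err-count 1 --sct 0 --sc 1 --do_not_submit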
00:09:18.700 22:53:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:21.249 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:21.249 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.249 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:21.249 [2024-11-26 22:53:59.780482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:21.249 [2024-11-26 22:53:59.780790] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:21.249 [2024-11-26 22:53:59.780816] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:21.249 [2024-11-26 22:53:59.780831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:21.249 [2024-11-26 22:53:59.784219] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:21.249 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.249 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 77432 00:09:21.249 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 77432 00:09:21.249 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 77432 00:09:21.249 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:21.249 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:21.249 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:21.249 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.249 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:21.249 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.249 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:21.249 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_eDvTi.txt 00:09:21.250 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:21.250 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:21.250 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:21.250 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:21.250 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 
00:09:21.250 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:21.250 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:21.250 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:21.250 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:21.250 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:21.250 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:21.250 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:21.250 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:21.250 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:21.250 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:21.250 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:21.250 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:21.250 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:21.250 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:21.250 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_eDvTi.txt 00:09:21.250 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 77409 00:09:21.250 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 77409 ']' 00:09:21.250 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 77409 00:09:21.250 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:09:21.250 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:21.250 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77409 00:09:21.250 killing process with pid 77409 00:09:21.250 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:21.250 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:21.250 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77409' 00:09:21.250 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 77409 00:09:21.250 22:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 77409 00:09:21.250 22:54:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:21.250 22:54:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:21.250 
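The final comparison above checks that the status the controller actually returned matches the injected sct/sc pair. The trace shows how that status is recovered from the base64-encoded 16-byte completion; a helper-free sketch of the same decode, assuming (per the NVMe completion layout) that the status word sits in bytes 14-15, is:

  # Decode the completion blob captured in the log and pull out the
  # Status Code and Status Code Type fields.
  cpl_b64=AAAAAAAAAAAAAAAAAAACAA==   # value copied from the trace above
  bytes=($(base64 -d <(printf '%s' "$cpl_b64") | hexdump -ve '/1 "0x%02x\n"'))
  status=$(( bytes[15] << 8 | bytes[14] ))   # little-endian status word
  sc=$((  (status >> 1) & 0xff ))            # Status Code       -> 0x1 here
  sct=$(( (status >> 9) & 0x7 ))             # Status Code Type  -> 0x0 here
  printf 'SC=0x%x SCT=0x%x\n' "$sc" "$sct"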
00:09:21.250 real 0m3.665s 00:09:21.250 user 0m12.924s 00:09:21.250 sys 0m0.548s 00:09:21.250 22:54:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.250 ************************************ 00:09:21.250 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:21.250 22:54:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:21.250 ************************************ 00:09:21.250 22:54:00 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:21.250 22:54:00 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:21.250 22:54:00 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:21.250 22:54:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.250 22:54:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:21.250 ************************************ 00:09:21.250 START TEST nvme_fio 00:09:21.250 ************************************ 00:09:21.250 22:54:00 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:09:21.250 22:54:00 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:21.250 22:54:00 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:21.250 22:54:00 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:21.250 22:54:00 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:21.250 22:54:00 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:09:21.250 22:54:00 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:21.250 22:54:00 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:21.250 22:54:00 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:21.250 22:54:00 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:21.250 22:54:00 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:21.250 22:54:00 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:21.250 22:54:00 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:21.250 22:54:00 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:21.250 22:54:00 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:21.250 22:54:00 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:21.513 22:54:00 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:21.513 22:54:00 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:21.774 22:54:00 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:21.775 22:54:00 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:21.775 22:54:00 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:21.775 22:54:00 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:21.775 22:54:00 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:09:21.775 22:54:00 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:21.775 22:54:00 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:21.775 22:54:00 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:21.775 22:54:00 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:21.775 22:54:00 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:21.775 22:54:00 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:21.775 22:54:00 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:21.775 22:54:00 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:21.775 22:54:00 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:21.775 22:54:00 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:21.775 22:54:00 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:21.775 22:54:00 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:21.775 22:54:00 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:22.035 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:22.035 fio-3.35 00:09:22.035 Starting 1 thread 00:09:27.320 00:09:27.320 test: (groupid=0, jobs=1): err= 0: pid=77555: Tue Nov 26 22:54:06 2024 00:09:27.320 read: IOPS=18.8k, BW=73.4MiB/s (77.0MB/s)(147MiB/2001msec) 00:09:27.320 slat (nsec): min=4268, max=62617, avg=5940.21, stdev=2790.57 00:09:27.320 clat (usec): min=218, max=11360, avg=3379.77, stdev=1094.52 00:09:27.320 lat (usec): min=223, max=11413, avg=3385.71, stdev=1095.75 00:09:27.320 clat percentiles (usec): 00:09:27.320 | 1.00th=[ 2089], 5.00th=[ 2376], 10.00th=[ 2474], 20.00th=[ 2606], 00:09:27.320 | 30.00th=[ 2737], 40.00th=[ 2835], 50.00th=[ 2966], 60.00th=[ 3130], 00:09:27.320 | 70.00th=[ 3458], 80.00th=[ 4146], 90.00th=[ 5080], 95.00th=[ 5800], 00:09:27.320 | 99.00th=[ 6849], 99.50th=[ 7242], 99.90th=[ 8455], 99.95th=[ 9110], 00:09:27.320 | 99.99th=[11207] 00:09:27.320 bw ( KiB/s): min=71096, max=79584, per=99.80%, avg=75058.67, stdev=4271.88, samples=3 00:09:27.320 iops : min=17774, max=19896, avg=18764.67, stdev=1067.97, samples=3 00:09:27.320 write: IOPS=18.8k, BW=73.5MiB/s (77.0MB/s)(147MiB/2001msec); 0 zone resets 00:09:27.320 slat (nsec): min=4447, max=68564, avg=6170.52, stdev=2849.47 00:09:27.320 clat (usec): min=264, max=11282, avg=3403.64, stdev=1085.39 00:09:27.320 lat (usec): min=270, max=11297, avg=3409.82, stdev=1086.59 00:09:27.320 clat percentiles (usec): 00:09:27.320 | 1.00th=[ 2147], 5.00th=[ 2409], 10.00th=[ 2507], 20.00th=[ 2638], 00:09:27.320 | 30.00th=[ 2737], 40.00th=[ 2868], 50.00th=[ 2999], 60.00th=[ 3163], 00:09:27.320 | 70.00th=[ 3490], 80.00th=[ 4178], 90.00th=[ 5080], 95.00th=[ 5800], 00:09:27.320 | 99.00th=[ 6915], 99.50th=[ 7242], 99.90th=[ 8586], 99.95th=[ 9110], 00:09:27.320 | 99.99th=[11207] 00:09:27.320 bw ( KiB/s): min=71024, max=79488, per=99.77%, avg=75050.67, stdev=4246.92, samples=3 00:09:27.320 iops : min=17756, max=19872, avg=18762.67, stdev=1061.73, samples=3 
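[editor's note] The fio_plugin xtrace just above boils down to a small LD_PRELOAD dance: when the SPDK fio ioengine was built with a sanitizer, the ASAN runtime has to be preloaded ahead of the plugin itself, otherwise ASAN typically refuses to start because it is not first in the library list. A minimal sketch of the same steps, assuming the paths from the trace (note the traddr uses '.' in place of ':', since fio reserves ':' inside --filename):

    # find the ASAN runtime the plugin links against (empty if not an ASAN build)
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # preload the sanitizer runtime first, then the SPDK ioengine
    LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096

The trace also lists libclang_rt.asan in the sanitizers array, so a clang build would be handled the same way with a second grep pass.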
00:09:27.320 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:09:27.320 lat (msec) : 2=0.61%, 4=77.60%, 10=21.73%, 20=0.02% 00:09:27.320 cpu : usr=98.95%, sys=0.10%, ctx=4, majf=0, minf=623 00:09:27.320 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:27.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.320 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:27.320 issued rwts: total=37623,37631,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.320 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:27.320 00:09:27.320 Run status group 0 (all jobs): 00:09:27.320 READ: bw=73.4MiB/s (77.0MB/s), 73.4MiB/s-73.4MiB/s (77.0MB/s-77.0MB/s), io=147MiB (154MB), run=2001-2001msec 00:09:27.320 WRITE: bw=73.5MiB/s (77.0MB/s), 73.5MiB/s-73.5MiB/s (77.0MB/s-77.0MB/s), io=147MiB (154MB), run=2001-2001msec 00:09:27.320 ----------------------------------------------------- 00:09:27.320 Suppressions used: 00:09:27.320 count bytes template 00:09:27.320 1 32 /usr/src/fio/parse.c 00:09:27.320 1 8 libtcmalloc_minimal.so 00:09:27.320 ----------------------------------------------------- 00:09:27.320 00:09:27.320 22:54:06 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:27.320 22:54:06 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:27.320 22:54:06 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:27.320 22:54:06 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:27.578 22:54:06 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:27.578 22:54:06 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:27.836 22:54:06 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:27.836 22:54:06 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:27.836 22:54:06 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:27.836 22:54:06 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:27.836 22:54:06 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:27.836 22:54:06 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:27.836 22:54:06 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:27.836 22:54:06 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:27.836 22:54:06 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:27.836 22:54:06 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:27.836 22:54:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:27.836 22:54:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:27.837 22:54:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:27.837 22:54:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:27.837 22:54:06 nvme.nvme_fio -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:27.837 22:54:06 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:27.837 22:54:06 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:27.837 22:54:06 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:28.095 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:28.095 fio-3.35 00:09:28.095 Starting 1 thread 00:09:34.729 00:09:34.729 test: (groupid=0, jobs=1): err= 0: pid=77617: Tue Nov 26 22:54:13 2024 00:09:34.729 read: IOPS=21.5k, BW=83.8MiB/s (87.9MB/s)(168MiB/2001msec) 00:09:34.729 slat (usec): min=4, max=634, avg= 5.75, stdev= 3.98 00:09:34.729 clat (usec): min=220, max=10827, avg=2981.31, stdev=893.58 00:09:34.729 lat (usec): min=225, max=10840, avg=2987.06, stdev=894.90 00:09:34.729 clat percentiles (usec): 00:09:34.729 | 1.00th=[ 2147], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2573], 00:09:34.729 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2704], 00:09:34.729 | 70.00th=[ 2802], 80.00th=[ 3064], 90.00th=[ 3916], 95.00th=[ 5014], 00:09:34.729 | 99.00th=[ 6652], 99.50th=[ 7242], 99.90th=[ 9634], 99.95th=[10421], 00:09:34.729 | 99.99th=[10552] 00:09:34.729 bw ( KiB/s): min=74968, max=91912, per=99.96%, avg=85818.67, stdev=9420.67, samples=3 00:09:34.729 iops : min=18742, max=22978, avg=21454.67, stdev=2355.17, samples=3 00:09:34.729 write: IOPS=21.3k, BW=83.2MiB/s (87.2MB/s)(166MiB/2001msec); 0 zone resets 00:09:34.729 slat (usec): min=4, max=585, avg= 6.05, stdev= 4.53 00:09:34.729 clat (usec): min=272, max=10766, avg=2985.02, stdev=892.16 00:09:34.729 lat (usec): min=277, max=10778, avg=2991.06, stdev=893.50 00:09:34.729 clat percentiles (usec): 00:09:34.729 | 1.00th=[ 2147], 5.00th=[ 2474], 10.00th=[ 2507], 20.00th=[ 2573], 00:09:34.729 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2704], 00:09:34.729 | 70.00th=[ 2802], 80.00th=[ 3064], 90.00th=[ 3916], 95.00th=[ 5014], 00:09:34.729 | 99.00th=[ 6652], 99.50th=[ 7242], 99.90th=[ 9634], 99.95th=[10421], 00:09:34.729 | 99.99th=[10683] 00:09:34.729 bw ( KiB/s): min=76648, max=91312, per=100.00%, avg=86010.67, stdev=8131.98, samples=3 00:09:34.729 iops : min=19162, max=22828, avg=21502.67, stdev=2032.99, samples=3 00:09:34.729 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:09:34.729 lat (msec) : 2=0.51%, 4=89.99%, 10=9.38%, 20=0.08% 00:09:34.729 cpu : usr=98.65%, sys=0.20%, ctx=24, majf=0, minf=623 00:09:34.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:34.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:34.729 issued rwts: total=42948,42614,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:34.729 00:09:34.729 Run status group 0 (all jobs): 00:09:34.729 READ: bw=83.8MiB/s (87.9MB/s), 83.8MiB/s-83.8MiB/s (87.9MB/s-87.9MB/s), io=168MiB (176MB), run=2001-2001msec 00:09:34.729 WRITE: bw=83.2MiB/s (87.2MB/s), 83.2MiB/s-83.2MiB/s (87.2MB/s-87.2MB/s), io=166MiB (175MB), run=2001-2001msec 00:09:34.729 ----------------------------------------------------- 00:09:34.729 Suppressions used: 00:09:34.729 count 
bytes template 00:09:34.729 1 32 /usr/src/fio/parse.c 00:09:34.729 1 8 libtcmalloc_minimal.so 00:09:34.729 ----------------------------------------------------- 00:09:34.729 00:09:34.729 22:54:13 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:34.729 22:54:13 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:34.729 22:54:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:34.729 22:54:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:34.729 22:54:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:34.729 22:54:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:34.991 22:54:14 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:34.991 22:54:14 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:34.991 22:54:14 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:34.991 22:54:14 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:34.991 22:54:14 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:34.991 22:54:14 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:34.991 22:54:14 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:34.991 22:54:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:34.991 22:54:14 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:34.991 22:54:14 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:34.991 22:54:14 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:34.991 22:54:14 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:34.991 22:54:14 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:34.991 22:54:14 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:34.991 22:54:14 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:34.991 22:54:14 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:34.991 22:54:14 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:34.991 22:54:14 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:35.253 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:35.253 fio-3.35 00:09:35.253 Starting 1 thread 00:09:41.858 00:09:41.858 test: (groupid=0, jobs=1): err= 0: pid=77682: Tue Nov 26 22:54:20 2024 00:09:41.859 read: IOPS=19.4k, BW=75.8MiB/s (79.4MB/s)(152MiB/2001msec) 00:09:41.859 slat (nsec): min=4762, max=65232, avg=6061.50, stdev=2697.13 00:09:41.859 clat (usec): min=327, max=11446, avg=3281.96, stdev=1133.55 00:09:41.859 lat (usec): min=334, 
max=11453, avg=3288.02, stdev=1134.79 00:09:41.859 clat percentiles (usec): 00:09:41.859 | 1.00th=[ 2089], 5.00th=[ 2376], 10.00th=[ 2507], 20.00th=[ 2573], 00:09:41.859 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2769], 60.00th=[ 2933], 00:09:41.859 | 70.00th=[ 3228], 80.00th=[ 3916], 90.00th=[ 5080], 95.00th=[ 5866], 00:09:41.859 | 99.00th=[ 7111], 99.50th=[ 7504], 99.90th=[ 8848], 99.95th=[10159], 00:09:41.859 | 99.99th=[10814] 00:09:41.859 bw ( KiB/s): min=74208, max=87280, per=100.00%, avg=79797.33, stdev=6738.53, samples=3 00:09:41.859 iops : min=18552, max=21820, avg=19949.33, stdev=1684.63, samples=3 00:09:41.859 write: IOPS=19.4k, BW=75.6MiB/s (79.3MB/s)(151MiB/2001msec); 0 zone resets 00:09:41.859 slat (nsec): min=4843, max=57522, avg=6281.57, stdev=2635.25 00:09:41.859 clat (usec): min=340, max=10939, avg=3304.29, stdev=1143.76 00:09:41.859 lat (usec): min=348, max=10945, avg=3310.57, stdev=1144.97 00:09:41.859 clat percentiles (usec): 00:09:41.859 | 1.00th=[ 2114], 5.00th=[ 2409], 10.00th=[ 2507], 20.00th=[ 2606], 00:09:41.859 | 30.00th=[ 2638], 40.00th=[ 2704], 50.00th=[ 2802], 60.00th=[ 2966], 00:09:41.859 | 70.00th=[ 3228], 80.00th=[ 3949], 90.00th=[ 5145], 95.00th=[ 5866], 00:09:41.859 | 99.00th=[ 7242], 99.50th=[ 7635], 99.90th=[ 8848], 99.95th=[ 9765], 00:09:41.859 | 99.99th=[10683] 00:09:41.859 bw ( KiB/s): min=74296, max=87240, per=100.00%, avg=79800.00, stdev=6685.65, samples=3 00:09:41.859 iops : min=18574, max=21810, avg=19950.00, stdev=1671.41, samples=3 00:09:41.859 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:09:41.859 lat (msec) : 2=0.59%, 4=80.05%, 10=19.27%, 20=0.05% 00:09:41.859 cpu : usr=99.00%, sys=0.10%, ctx=3, majf=0, minf=624 00:09:41.859 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:41.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.859 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:41.859 issued rwts: total=38804,38725,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.859 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:41.859 00:09:41.859 Run status group 0 (all jobs): 00:09:41.859 READ: bw=75.8MiB/s (79.4MB/s), 75.8MiB/s-75.8MiB/s (79.4MB/s-79.4MB/s), io=152MiB (159MB), run=2001-2001msec 00:09:41.859 WRITE: bw=75.6MiB/s (79.3MB/s), 75.6MiB/s-75.6MiB/s (79.3MB/s-79.3MB/s), io=151MiB (159MB), run=2001-2001msec 00:09:41.859 ----------------------------------------------------- 00:09:41.859 Suppressions used: 00:09:41.859 count bytes template 00:09:41.859 1 32 /usr/src/fio/parse.c 00:09:41.859 1 8 libtcmalloc_minimal.so 00:09:41.859 ----------------------------------------------------- 00:09:41.859 00:09:41.859 22:54:20 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:41.859 22:54:20 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:41.859 22:54:20 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:41.859 22:54:20 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:41.859 22:54:20 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:41.859 22:54:20 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:41.859 22:54:20 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:41.859 22:54:20 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:41.859 22:54:20 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:41.859 22:54:20 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:41.859 22:54:20 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:41.859 22:54:20 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:41.859 22:54:20 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:41.859 22:54:20 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:41.859 22:54:20 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:41.859 22:54:20 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:41.859 22:54:20 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:41.859 22:54:20 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:41.859 22:54:20 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:41.859 22:54:20 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:41.859 22:54:20 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:41.859 22:54:20 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:41.859 22:54:20 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:41.859 22:54:20 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:41.859 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:41.859 fio-3.35 00:09:41.859 Starting 1 thread 00:09:49.967 00:09:49.967 test: (groupid=0, jobs=1): err= 0: pid=77744: Tue Nov 26 22:54:27 2024 00:09:49.967 read: IOPS=20.5k, BW=80.0MiB/s (83.9MB/s)(160MiB/2001msec) 00:09:49.967 slat (nsec): min=3958, max=80865, avg=6086.87, stdev=2677.60 00:09:49.967 clat (usec): min=217, max=12399, avg=3107.00, stdev=985.95 00:09:49.967 lat (usec): min=222, max=12480, avg=3113.08, stdev=987.58 00:09:49.967 clat percentiles (usec): 00:09:49.967 | 1.00th=[ 2180], 5.00th=[ 2474], 10.00th=[ 2540], 20.00th=[ 2573], 00:09:49.967 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2704], 60.00th=[ 2737], 00:09:49.967 | 70.00th=[ 2966], 80.00th=[ 3458], 90.00th=[ 4359], 95.00th=[ 5407], 00:09:49.967 | 99.00th=[ 6915], 99.50th=[ 7701], 99.90th=[ 8160], 99.95th=[ 9241], 00:09:49.967 | 99.99th=[11994] 00:09:49.967 bw ( KiB/s): min=78464, max=83880, per=98.17%, avg=80453.33, stdev=2980.39, samples=3 00:09:49.967 iops : min=19616, max=20970, avg=20113.33, stdev=745.10, samples=3 00:09:49.967 write: IOPS=20.4k, BW=79.8MiB/s (83.7MB/s)(160MiB/2001msec); 0 zone resets 00:09:49.967 slat (nsec): min=4118, max=65455, avg=6407.87, stdev=2764.57 00:09:49.967 clat (usec): min=226, max=12174, avg=3123.37, stdev=1004.33 00:09:49.967 lat (usec): min=231, max=12190, avg=3129.78, stdev=1005.99 00:09:49.967 clat percentiles (usec): 00:09:49.967 | 1.00th=[ 2212], 5.00th=[ 2474], 10.00th=[ 2540], 
20.00th=[ 2573], 00:09:49.967 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2704], 60.00th=[ 2769], 00:09:49.967 | 70.00th=[ 2999], 80.00th=[ 3490], 90.00th=[ 4424], 95.00th=[ 5538], 00:09:49.967 | 99.00th=[ 7111], 99.50th=[ 7701], 99.90th=[ 8160], 99.95th=[ 9503], 00:09:49.967 | 99.99th=[11600] 00:09:49.967 bw ( KiB/s): min=78288, max=84288, per=98.43%, avg=80477.33, stdev=3312.33, samples=3 00:09:49.967 iops : min=19572, max=21072, avg=20119.33, stdev=828.08, samples=3 00:09:49.967 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:09:49.967 lat (msec) : 2=0.45%, 4=86.31%, 10=13.16%, 20=0.04% 00:09:49.967 cpu : usr=99.05%, sys=0.15%, ctx=11, majf=0, minf=623 00:09:49.967 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:49.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:49.967 issued rwts: total=40998,40900,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.967 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:49.967 00:09:49.967 Run status group 0 (all jobs): 00:09:49.967 READ: bw=80.0MiB/s (83.9MB/s), 80.0MiB/s-80.0MiB/s (83.9MB/s-83.9MB/s), io=160MiB (168MB), run=2001-2001msec 00:09:49.967 WRITE: bw=79.8MiB/s (83.7MB/s), 79.8MiB/s-79.8MiB/s (83.7MB/s-83.7MB/s), io=160MiB (168MB), run=2001-2001msec 00:09:49.967 ----------------------------------------------------- 00:09:49.967 Suppressions used: 00:09:49.967 count bytes template 00:09:49.967 1 32 /usr/src/fio/parse.c 00:09:49.967 1 8 libtcmalloc_minimal.so 00:09:49.967 ----------------------------------------------------- 00:09:49.967 00:09:49.967 22:54:27 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:49.967 22:54:27 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:49.967 00:09:49.967 real 0m27.498s 00:09:49.967 user 0m19.286s 00:09:49.967 sys 0m13.047s 00:09:49.967 22:54:27 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.967 ************************************ 00:09:49.967 END TEST nvme_fio 00:09:49.967 ************************************ 00:09:49.967 22:54:27 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:49.967 ************************************ 00:09:49.967 END TEST nvme 00:09:49.967 ************************************ 00:09:49.967 00:09:49.967 real 1m37.274s 00:09:49.967 user 3m38.636s 00:09:49.967 sys 0m24.252s 00:09:49.967 22:54:27 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.967 22:54:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:49.967 22:54:27 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:49.967 22:54:27 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:49.967 22:54:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:49.967 22:54:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.967 22:54:27 -- common/autotest_common.sh@10 -- # set +x 00:09:49.967 ************************************ 00:09:49.967 START TEST nvme_scc 00:09:49.967 ************************************ 00:09:49.967 22:54:27 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:49.967 * Looking for test storage... 
00:09:49.967 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:49.967 22:54:27 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:49.967 22:54:27 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:49.967 22:54:27 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:49.967 22:54:27 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:49.967 22:54:27 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:49.967 22:54:27 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:49.967 22:54:27 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:49.967 22:54:27 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:49.967 22:54:27 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:49.967 22:54:27 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:49.967 22:54:27 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:49.967 22:54:27 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:49.967 22:54:27 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:49.967 22:54:27 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:49.967 22:54:27 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:49.967 22:54:27 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:49.967 22:54:27 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:49.967 22:54:27 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:49.967 22:54:27 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:49.967 22:54:27 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:49.967 22:54:27 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:49.967 22:54:27 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:49.967 22:54:27 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:49.967 22:54:27 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:49.967 22:54:27 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:49.967 22:54:27 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:49.967 22:54:27 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:49.967 22:54:27 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:49.967 22:54:27 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:49.967 22:54:27 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:49.967 22:54:27 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:49.967 22:54:27 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:49.967 22:54:27 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:49.967 22:54:27 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:49.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.967 --rc genhtml_branch_coverage=1 00:09:49.967 --rc genhtml_function_coverage=1 00:09:49.967 --rc genhtml_legend=1 00:09:49.967 --rc geninfo_all_blocks=1 00:09:49.967 --rc geninfo_unexecuted_blocks=1 00:09:49.967 00:09:49.967 ' 00:09:49.967 22:54:27 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:49.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.967 --rc genhtml_branch_coverage=1 00:09:49.967 --rc genhtml_function_coverage=1 00:09:49.967 --rc genhtml_legend=1 00:09:49.968 --rc geninfo_all_blocks=1 00:09:49.968 --rc geninfo_unexecuted_blocks=1 00:09:49.968 00:09:49.968 ' 00:09:49.968 22:54:27 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:49.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.968 --rc genhtml_branch_coverage=1 00:09:49.968 --rc genhtml_function_coverage=1 00:09:49.968 --rc genhtml_legend=1 00:09:49.968 --rc geninfo_all_blocks=1 00:09:49.968 --rc geninfo_unexecuted_blocks=1 00:09:49.968 00:09:49.968 ' 00:09:49.968 22:54:27 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:49.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.968 --rc genhtml_branch_coverage=1 00:09:49.968 --rc genhtml_function_coverage=1 00:09:49.968 --rc genhtml_legend=1 00:09:49.968 --rc geninfo_all_blocks=1 00:09:49.968 --rc geninfo_unexecuted_blocks=1 00:09:49.968 00:09:49.968 ' 00:09:49.968 22:54:27 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:49.968 22:54:27 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:49.968 22:54:28 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:49.968 22:54:28 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:49.968 22:54:28 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:49.968 22:54:28 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:49.968 22:54:28 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.968 22:54:28 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.968 22:54:28 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.968 22:54:28 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.968 22:54:28 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.968 22:54:28 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.968 22:54:28 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:49.968 22:54:28 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
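[editor's note] The lcov version guard traced a few entries back (lt 1.15 2 via cmp_versions in scripts/common.sh) splits each version string on '.', '-' and ':' and compares component-wise as integers. A simplified sketch of that logic; the real helper also sanitizes non-numeric components through its decimal helper, which this version omits:

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v n
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ((n = ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
        for ((v = 0; v < n; v++)); do
            # a missing component counts as 0
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' || $op == '>=' ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }
    lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 -> exit 0 (true)

Here the first components already decide the result (1 < 2), so the installed lcov 1.15 is classified as older than 2 and the pre-2.0 LCOV_OPTS seen in the log are exported.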
00:09:49.968 22:54:28 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:49.968 22:54:28 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:49.968 22:54:28 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:49.968 22:54:28 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:49.968 22:54:28 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:49.968 22:54:28 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:49.968 22:54:28 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:49.968 22:54:28 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:49.968 22:54:28 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:49.968 22:54:28 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:49.968 22:54:28 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:49.968 22:54:28 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:49.968 22:54:28 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:49.968 22:54:28 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:49.968 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:49.968 Waiting for block devices as requested 00:09:49.968 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:49.968 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:49.968 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:49.968 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:55.253 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:55.253 22:54:33 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:55.253 22:54:33 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:55.253 22:54:33 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:55.253 22:54:33 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:55.253 22:54:33 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:55.253 22:54:33 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:55.253 22:54:33 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:55.253 22:54:33 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:55.253 22:54:33 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:55.253 22:54:33 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:55.253 22:54:33 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:55.253 22:54:33 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:55.253 22:54:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:55.253 22:54:33 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:55.253 22:54:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:55.253 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.253 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.253 22:54:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:55.253 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:55.253 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.253 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.253 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.254 22:54:33 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.254 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:55.255 22:54:33 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.255 22:54:33 nvme_scc -- 
00:09:55.255 22:54:33 nvme_scc -- nvme/functions.sh@21-23 -- # nvme0 id-ctrl, remaining fields: pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:09:55.256 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:09:55.256 22:54:33 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:09:55.256 22:54:33 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:55.256 22:54:33 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]
00:09:55.256 22:54:33 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1
00:09:55.256 22:54:33 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1
00:09:55.256 22:54:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
00:09:55.256 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng0n1 id-ns: nsze=0x140000 ncap=0x140000 nuse=0x140000
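A note on what these trace lines are doing: functions.sh@16-23 runs nvme-cli once per device and folds each "key : value" line of its output into a global associative array named after the device. A minimal sketch of that pattern, assuming nvme-cli's default text output; the real nvme/functions.sh in the SPDK tree may differ in details:

  # nvme_get <array-name> <nvme-cli subcommand...>: parse "key : value"
  # lines into a global associative array, as in functions.sh@16-23.
  nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                    # e.g. declares global nvme0=()
      while IFS=: read -r reg val; do
          [[ -n $val ]] || continue          # skip lines with no value part
          reg=${reg//[[:space:]]/}           # strip padding around the key
          val=${val# }                       # drop the space after ':'
          eval "${ref}[\$reg]=\$val"         # e.g. nvme0[sqes]=0x66
      done < <(/usr/local/src/nvme-cli/nvme "$@")
  }
  # usage (hypothetical): nvme_get nvme0 id-ctrl /dev/nvme0; echo "${nvme0[sn]}"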
00:09:55.257 22:54:33 nvme_scc -- nvme/functions.sh@21-23 -- # ng0n1 id-ns, remaining fields: nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:55.258 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng0n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:55.258 22:54:33 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:09:55.258 22:54:33 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:55.258 22:54:33 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:09:55.258 22:54:33 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:09:55.258 22:54:33 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:09:55.258 22:54:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:09:55.258 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1 id-ns: nsze=0x140000 ncap=0x140000
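The for-loop at functions.sh@54 uses one extglob pattern so a single pass picks up both the generic char node (ng0n1) and the block node (nvme0n1) under the controller's sysfs directory. A sketch of that enumeration, assuming the nvme_get helper above and the sysfs layout visible in this log:

  shopt -s extglob                            # @(...) alternation needs extglob
  ctrl=/sys/class/nvme/nvme0
  declare -A nvme0_ns=()
  declare -n _ctrl_ns=nvme0_ns                # nameref, as in functions.sh@53
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      [[ -e $ns ]] || continue                # the glob may not match at all
      ns_dev=${ns##*/}                        # ng0n1 or nvme0n1
      nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
      _ctrl_ns[${ns##*n}]=$ns_dev             # key "1" = namespace id
  done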
00:09:55.259 22:54:33 nvme_scc -- nvme/functions.sh@21-23 -- # nvme0n1 id-ns, remaining fields (identical to ng0n1): nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:55.259 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:55.260 22:54:33 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:09:55.260 22:54:33 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:09:55.260 22:54:33 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:09:55.260 22:54:33 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:09:55.260 22:54:33 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:09:55.260 22:54:33 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:55.260 22:54:33 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:09:55.260 22:54:33 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:09:55.260 22:54:33 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:09:55.260 22:54:33 nvme_scc -- scripts/common.sh@18-27 -- # block/allow lists empty; pci_can_use returns 0
00:09:55.260 22:54:33 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
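Once a controller's namespaces are parsed, functions.sh@60-63 records it in a set of global maps keyed by device name, and @47-51 advances to the next /sys/class/nvme/nvme* entry, admitting it only if pci_can_use approves its PCI address. A sketch of the registration step (the array names follow the trace; the helper name and the BDF lookup via the sysfs "device" symlink are assumptions):

  declare -A ctrls=() nvmes=() bdfs=()
  declare -a ordered_ctrls=()
  register_ctrl() {                           # hypothetical helper
      local ctrl_dev=$1 bdf
      bdf=$(basename "$(readlink -f "/sys/class/nvme/$ctrl_dev/device")")
      ctrls["$ctrl_dev"]=$ctrl_dev                # @60
      nvmes["$ctrl_dev"]=${ctrl_dev}_ns           # @61: name of its ns map
      bdfs["$ctrl_dev"]=$bdf                      # @62: e.g. 0000:00:11.0
      ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev  # @63: stable numeric order
  }
  register_ctrl nvme0                         # here: bdfs[nvme0]=0000:00:11.0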
00:09:55.260 22:54:33 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:09:55.260 22:54:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:09:55.261 22:54:33 nvme_scc -- nvme/functions.sh@21-23 -- # nvme1 id-ctrl: vid=0x1b36 ssvid=0x1af4 sn='12340 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0
00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval
'nvme1[anacap]="0"' 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:55.262 22:54:33 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.262 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:55.263 22:54:33 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
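The ng1n1 walk under way here was entered through the extglob loop traced just before it: for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*. With ctrl=/sys/class/nvme/nvme1, ${ctrl##*nvme} expands to 1 and ${ctrl##*/} to nvme1, so the pattern picks up both the generic character node (ng1n1, being parsed now) and the block node (nvme1n1, parsed next). A standalone sketch of the same expansion against a hypothetical /tmp layout; nullglob is added here so a no-match expands to nothing:

    shopt -s extglob nullglob
    ctrl=/tmp/demo/nvme1            # stand-in for /sys/class/nvme/nvme1
    mkdir -p "$ctrl" && touch "$ctrl"/ng1n1 "$ctrl"/nvme1n1 "$ctrl"/firmware_rev
    inst=${ctrl##*nvme}             # -> 1      (strip through the last "nvme")
    name=${ctrl##*/}                # -> nvme1  (basename)
    for ns in "$ctrl/"@("ng${inst}"|"${name}n")*; do
      echo "namespace node: ${ns##*/}"   # ng1n1 and nvme1n1, not firmware_rev
    done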
00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:55.263 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:55.264 22:54:33 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.264 
22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:55.264 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
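Both ng1n1 above and nvme1n1 here report nsze=0x17a17a with flbas=0x7, i.e. LBA format 7 is in use (lbads:12 per the lbaf7 entry listed for ng1n1 above, so 4096-byte data blocks with 64 bytes of metadata). That fixes the namespace capacity; a quick shell check of the arithmetic, using only values from the trace:

    nsze=0x17a17a                         # namespace size in logical blocks
    lbads=12                              # lbaf7: lbads:12 -> 4096-byte blocks
    blocks=$((nsze))                      # 1548666
    bytes=$((blocks * (1 << lbads)))      # 6343335936
    printf '%d blocks x %d B = %d bytes (~%d MiB)\n' \
      "$blocks" $((1 << lbads)) "$bytes" $((bytes >> 20))
    # -> 1548666 blocks x 4096 B = 6343335936 bytes (~6049 MiB)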
00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:55.265 
22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.265 22:54:33 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.265 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.266 22:54:33 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:55.266 22:54:33 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:55.266 22:54:33 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:55.266 22:54:33 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:55.266 22:54:33 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:55.266 22:54:33 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 
00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' nvme2[rab]=6 nvme2[ieee]=525400 nvme2[cmic]=0 nvme2[mdts]=7 nvme2[cntlid]=0 nvme2[ver]=0x10400
00:09:55.266 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 nvme2[rtd3e]=0 nvme2[oaes]=0x100 nvme2[ctratt]=0x8000 nvme2[rrls]=0 nvme2[cntrltype]=1
00:09:55.267 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 nvme2[crdt1]=0 nvme2[crdt2]=0 nvme2[crdt3]=0 nvme2[nvmsr]=0 nvme2[vwci]=0 nvme2[mec]=0
00:09:55.267 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a nvme2[acl]=3 nvme2[aerl]=3 nvme2[frmw]=0x3 nvme2[lpa]=0x7 nvme2[elpe]=0 nvme2[npss]=0 nvme2[avscc]=0 nvme2[apsta]=0
00:09:55.267 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 nvme2[cctemp]=373 nvme2[mtfa]=0 nvme2[hmpre]=0 nvme2[hmmin]=0 nvme2[tnvmcap]=0 nvme2[unvmcap]=0 nvme2[rpmbs]=0
00:09:55.267 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 nvme2[dsto]=0 nvme2[fwug]=0 nvme2[kas]=0 nvme2[hctma]=0 nvme2[mntmt]=0 nvme2[mxtmt]=0 nvme2[sanicap]=0
00:09:55.268 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 nvme2[hmmaxd]=0 nvme2[nsetidmax]=0 nvme2[endgidmax]=0 nvme2[anatt]=0 nvme2[anacap]=0 nvme2[anagrpmax]=0 nvme2[nanagrpid]=0
00:09:55.268 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 nvme2[domainid]=0 nvme2[megcap]=0 nvme2[sqes]=0x66 nvme2[cqes]=0x44 nvme2[maxcmd]=0 nvme2[nn]=256
00:09:55.268 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d nvme2[fuses]=0 nvme2[fna]=0 nvme2[vwc]=0x7 nvme2[awun]=0 nvme2[awupf]=0 nvme2[icsvscc]=0 nvme2[nwpc]=0 nvme2[acwu]=0
00:09:55.268 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 nvme2[sgls]=0x1 nvme2[mnan]=0 nvme2[maxdna]=0 nvme2[maxcna]=0 nvme2[subnqn]=nqn.2019-08.org.qemu:12342
00:09:55.269 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 nvme2[iorcsz]=0 nvme2[icdoff]=0 nvme2[fcatt]=0 nvme2[msdbd]=0 nvme2[ofcs]=0
00:09:55.269 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:55.269 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:09:55.269 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
00:09:55.269 22:54:33 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:09:55.269 22:54:33 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
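
Every assignment line in this section comes from a single parser: nvme_get runs nvme-cli (id-ctrl here, id-ns for the namespaces below), splits each 'reg : val' line with IFS=: / read -r reg val, and evals the pair into a global associative array named after the device, which is the repeating [[ -n ... ]] / eval / IFS=: / read rhythm in the trace. One artifact of line-based parsing is visible above: nvme-cli prints the ps0 power state across two output lines, so the continuation line is swallowed as a bogus rwt key ('0 rwl:0 idle_power:- active_power:-'). A minimal re-creation of the loop, with a here-doc standing in for the /usr/local/src/nvme-cli/nvme call:

# Sketch of the nvme_get parsing pattern seen in the trace; stdin
# stands in for the nvme-cli output that the real function captures.
nvme_get() {
  local ref=$1 reg val
  local -gA "$ref=()"                 # global map named after the device
  while IFS=: read -r reg val; do
    [[ -n $reg ]] || continue
    reg=${reg//[[:space:]]/}          # 'vid       ' -> 'vid'
    eval "${ref}[${reg}]=\"${val# }\""
  done
}

nvme_get nvme2 <<'EOF'
vid       : 0x1b36
ssvid     : 0x1af4
mn        : QEMU NVMe Ctrl
EOF
echo "${nvme2[vid]} / ${nvme2[mn]}"   # 0x1b36 / QEMU NVMe Ctrl
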
00:09:55.269 22:54:33 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:09:55.269 22:54:33 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:09:55.269 22:54:33 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:09:55.269 22:54:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:09:55.269 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 ng2n1[ncap]=0x100000 ng2n1[nuse]=0x100000 ng2n1[nsfeat]=0x14 ng2n1[nlbaf]=7 ng2n1[flbas]=0x4
00:09:55.269 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 ng2n1[dpc]=0x1f ng2n1[dps]=0 ng2n1[nmic]=0 ng2n1[rescap]=0 ng2n1[fpi]=0 ng2n1[dlfeat]=1
00:09:55.269 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 ng2n1[nawupf]=0 ng2n1[nacwu]=0 ng2n1[nabsn]=0 ng2n1[nabo]=0 ng2n1[nabspf]=0 ng2n1[noiob]=0 ng2n1[nvmcap]=0
00:09:55.270 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 ng2n1[npwa]=0 ng2n1[npdg]=0 ng2n1[npda]=0 ng2n1[nows]=0 ng2n1[mssrl]=128 ng2n1[mcl]=128 ng2n1[msrc]=127
00:09:55.270 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 ng2n1[anagrpid]=0 ng2n1[nsattr]=0 ng2n1[nvmsetid]=0 ng2n1[endgid]=0
00:09:55.270 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 ng2n1[eui64]=0000000000000000
00:09:55.270 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' ng2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:55.270 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' ng2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:55.270 22:54:33 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
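
The for-ns glob above is worth unpacking: for nvme2 it expands to /sys/class/nvme/nvme2/@(ng2|nvme2n)*, so one loop picks up both the generic character-device nodes (ng2n1, ng2n2) and the block-device entries (nvme2n1, ...), and ${ns##*n} yields the namespace index used to slot the entry into _ctrl_ns. A standalone sketch of that matching; the shopt settings are assumptions, since the script's preamble isn't shown in the log:

# Sketch of the namespace glob from the trace, runnable on its own.
shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme2

# @("ng${ctrl##*nvme}"|"${ctrl##*/}n")* expands to @(ng2|nvme2n)* here
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
  ns_dev=${ns##*/}            # e.g. ng2n1
  echo "namespace node $ns_dev -> index ${ns_dev##*n}"
done
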
00:09:55.270 22:54:33 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:55.271 22:54:33 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:09:55.271 22:54:33 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:09:55.271 22:54:33 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:09:55.271 22:54:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:09:55.271 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 ng2n2[ncap]=0x100000 ng2n2[nuse]=0x100000 ng2n2[nsfeat]=0x14 ng2n2[nlbaf]=7 ng2n2[flbas]=0x4
00:09:55.271 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 ng2n2[dpc]=0x1f ng2n2[dps]=0 ng2n2[nmic]=0 ng2n2[rescap]=0 ng2n2[fpi]=0 ng2n2[dlfeat]=1
00:09:55.271 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 ng2n2[nawupf]=0 ng2n2[nacwu]=0 ng2n2[nabsn]=0 ng2n2[nabo]=0 ng2n2[nabspf]=0 ng2n2[noiob]=0 ng2n2[nvmcap]=0
00:09:55.271 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 ng2n2[npwa]=0 ng2n2[npdg]=0 ng2n2[npda]=0 ng2n2[nows]=0 ng2n2[mssrl]=128 ng2n2[mcl]=128 ng2n2[msrc]=127
00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 ng2n2[anagrpid]=0 ng2n2[nsattr]=0 ng2n2[nvmsetid]=0 ng2n2[endgid]=0
00:09:55.272 22:54:33
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:55.272 22:54:33 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:55.272 22:54:33 
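The trace above is nvme/functions.sh walking the "field : value" lines that nvme-cli prints for id-ns and storing each pair in a global associative array named after the namespace (here ng2n2). A minimal sketch of that parsing pattern follows; the function name nvme_get_sketch is hypothetical, it assumes a plain nvme binary on PATH (this run used /usr/local/src/nvme-cli/nvme), and it skips the extra massaging the real helper performs:

#!/usr/bin/env bash
# Sketch only: parse "field : value" lines from nvme-cli output
# into a global associative array named by $1.
nvme_get_sketch() {
	local ref=$1 reg val
	shift
	local -gA "$ref=()"
	while IFS=: read -r reg val; do
		[[ -n $val ]] || continue      # skip banner/blank lines with no value
		reg=${reg//[[:space:]]/}       # "lbaf  4 " -> "lbaf4", "nsze   " -> "nsze"
		val=${val# }                   # drop the space nvme-cli prints after ":"
		eval "${ref}[\$reg]=\$val"     # e.g. ng2n2[nsze]=0x100000
	done < <("$@")
}

nvme_get_sketch ng2n2 nvme id-ns /dev/ng2n2   # afterwards: echo "${ng2n2[nsze]}"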
00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val
00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()'
00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000
00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000
00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000
00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14
00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7
00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4
00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3
00:09:55.272 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f
00:09:55.273 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0
00:09:55.273 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0
00:09:55.273 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0
00:09:55.273 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0
00:09:55.273 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1
00:09:55.273 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0
00:09:55.273 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0
00:09:55.273 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0
00:09:55.273 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0
00:09:55.273 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0
00:09:55.273 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0
00:09:55.273 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0
00:09:55.273 22:54:33 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0
00:09:55.273 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0
00:09:55.273 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0
00:09:55.273 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0
00:09:55.273 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0
00:09:55.273 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0
00:09:55.273 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128
00:09:55.273 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128
00:09:55.273 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127
00:09:55.273 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0
00:09:55.273 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0
00:09:55.273 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0
00:09:55.273 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0
00:09:55.273 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0
00:09:55.273 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000
00:09:55.273 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000
00:09:55.273 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:55.273 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:55.273 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
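The for-loop at functions.sh@54 enumerates namespaces with an extglob pattern that matches both the generic character nodes (ng2n*) and the block nodes (nvme2n*) under the controller's sysfs directory, which is why each namespace appears twice in this trace. A standalone sketch of that glob, using the controller path from this run:

shopt -s extglob
ctrl=/sys/class/nvme/nvme2
# "ng${ctrl##*nvme}" -> ng2, "${ctrl##*/}n" -> nvme2n,
# so the pattern expands to "$ctrl/"@(ng2|nvme2n)*
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
	[[ -e $ns ]] || continue   # the @55 check: skip the literal pattern if nothing matched
	echo "namespace entry: ${ns##*/}"
done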
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:09:55.274 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
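Once an array is filled, functions.sh@58 registers it in _ctrl_ns keyed by namespace id: ${ns##*n} strips everything through the last "n" in the sysfs path, leaving only the numeric suffix. Because ng2nX and nvme2nX share that suffix, the block-device entry parsed in this second pass overwrites the generic-device entry stored under the same key. A small sketch of that keying, with the path taken from this run as an example:

ns=/sys/class/nvme/nvme2/nvme2n1
ns_dev=${ns##*/}               # -> nvme2n1
declare -A _ctrl_ns
_ctrl_ns[${ns##*n}]=$ns_dev    # ${ns##*n} -> "1", so _ctrl_ns[1]=nvme2n1
echo "${_ctrl_ns[1]}"          # prints nvme2n1 (ng2n1 occupied this slot one pass earlier)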
]] 00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:55.275 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:55.276 22:54:34 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:55.276 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.277 22:54:34 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:55.277 
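The nvme2n3 enumeration that begins here repeats the same parse loop the trace has already run for nvme2n2: functions.sh feeds nvme-cli's "key : value" output through `IFS=: read -r reg val` and evals each pair into a global associative array named after the device. A minimal sketch of that pattern follows; the names nvme_get_sketch and the NVME variable are illustrative only, and the nameref stands in for the eval-with-quoting the real script uses (it also does extra shift/ref bookkeeping not shown):

    # Parse "key : value" lines from nvme-cli into a global assoc array.
    nvme_get_sketch() {
        local ref=$1 cmd=$2 dev=$3 reg val
        declare -gA "$ref=()"            # e.g. nvme2n3=()
        local -n _arr=$ref               # nameref instead of the script's eval
        while IFS=: read -r reg val; do
            [[ -n $reg && -n $val ]] || continue
            reg=${reg//[[:space:]]/}     # keys arrive space-padded
            _arr[$reg]=${val# }          # drop one leading space, keep the rest
        done < <("${NVME:-nvme}" "$cmd" "$dev")   # trace uses /usr/local/src/nvme-cli/nvme
    }
    # Usage (needs root and nvme-cli):
    #   nvme_get_sketch nvme2n3 id-ns /dev/nvme2n3; echo "${nvme2n3[nsze]}"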
22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:55.277 22:54:34 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:55.277 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:55.278 22:54:34 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:55.278 22:54:34 nvme_scc -- 
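The lbaf0-lbaf7 strings captured above describe each supported LBA format: ms is the metadata bytes per block, lbads the log2 of the data block size, and rp a relative-performance hint. flbas=0x4 points at lbaf4, i.e. 4096-byte blocks with no metadata, which is why lbaf4 carries the "(in use)" marker. A small decoding helper (lbaf_block_size is hypothetical, for illustration):

    # "ms:0 lbads:12 rp:0 (in use)" -> 4096 (data block size in bytes)
    lbaf_block_size() {
        [[ $1 =~ lbads:([0-9]+) ]] || return 1
        echo $(( 1 << BASH_REMATCH[1] ))
    }
    # lbaf_block_size 'ms:0 lbads:12 rp:0 (in use)'  -> 4096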
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:55.278 22:54:34 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:55.278 22:54:34 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:55.278 22:54:34 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:55.278 22:54:34 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:55.279 22:54:34 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.279 22:54:34 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:55.279 22:54:34 nvme_scc -- 
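Among the nvme3 fields recorded in this stretch, mdts=7 bounds the largest single data transfer: the limit is 2^mdts pages of the controller's minimum page size. Assuming the common 4 KiB minimum page (CAP.MPSMIN=0, an assumption -- the CAP register is not shown in this log), that works out as below; mdts_bytes is an illustrative helper, not part of functions.sh:

    # mdts=7 -> 2^7 pages; with 4 KiB pages that is 512 KiB per command.
    mdts_bytes() { echo $(( (1 << $1) * 4096 )); }
    # mdts_bytes 7  -> 524288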
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:55.279 22:54:34 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:55.279 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.280 
22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:55.280 22:54:34 nvme_scc -- 
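The wctemp and cctemp values read back here are reported in kelvin, per the NVMe spec: 343 K is the warning threshold and 373 K the critical composite-temperature threshold. Converted with an integer-precision sketch (k_to_c is illustrative only):

    k_to_c() { echo $(( $1 - 273 )); }   # kelvin -> degrees C, integer precision
    # k_to_c 343 -> 70 (warning); k_to_c 373 -> 100 (critical)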
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.280 
22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.280 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:55.281 
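sqes=0x66 and cqes=0x44, set just above, each pack two log2 sizes into one byte: the high nibble is the maximum queue-entry size the controller supports and the low nibble the required minimum. A decoding sketch (qes_sizes is a hypothetical name):

    # 0x66 -> SQ entries min=max=64 bytes; 0x44 -> CQ entries min=max=16 bytes.
    qes_sizes() {
        local v=$(( $1 ))
        echo "max=$(( 1 << (v >> 4) )) min=$(( 1 << (v & 0xf) ))"
    }
    # qes_sizes 0x66  -> max=64 min=64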
22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.281 22:54:34 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:55.281 22:54:34 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:55.281 22:54:34 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:55.281 22:54:34 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
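The selection loop traced through here asks each controller the same question: does ONCS advertise the Copy command? Bit 8 of the oncs field is the simple-copy capability, and every controller in this run reports oncs=0x15d, which has that bit (0x100) set, so all four pass. The check in isolation (ctrl_has_scc_sketch is illustrative; the nameref mirrors what functions.sh@73 does):

    # functions.sh@188's test, isolated: true iff Copy is supported.
    ctrl_has_scc_sketch() {
        local -n _c=$1                   # nameref to e.g. the nvme1 array
        (( ${_c[oncs]:-0} & 1 << 8 ))    # ONCS bit 8 = Copy command
    }
    # ctrl_has_scc_sketch nvme1 && echo "nvme1 supports simple copy"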
00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:55.282 22:54:34 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:55.282 22:54:34 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:55.282 22:54:34 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:55.282 22:54:34 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:55.540 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:56.106 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:56.106 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:56.106 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:56.106 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:56.106 22:54:35 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:56.106 22:54:35 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:56.106 22:54:35 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.106 22:54:35 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:56.106 ************************************ 00:09:56.106 START TEST nvme_simple_copy 00:09:56.106 ************************************ 00:09:56.106 22:54:35 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:56.364 Initializing NVMe Controllers 00:09:56.364 Attaching to 0000:00:10.0 00:09:56.364 Controller supports SCC. Attached to 0000:00:10.0 00:09:56.364 Namespace ID: 1 size: 6GB 00:09:56.364 Initialization complete. 
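Note on the trace above: get_ctrls_with_feature settles SCC support per controller by testing bit 8 of the ONCS (Optional NVM Command Support) word read from id-ctrl; every controller here reports oncs=0x15d, which has bit 8 set, so all four qualify and nvme1 is returned first. A minimal standalone sketch of that test (the helper name is made up for illustration; 0x15d is the value reported in this log):

    # Bit 8 of ONCS advertises the Copy (Simple Copy) command.
    has_simple_copy() {
        local oncs=$1
        (( oncs & 1 << 8 ))   # exit status 0 when the bit is set
    }

    has_simple_copy 0x15d && echo "controller supports SCC"

setup.sh then rebinds the test controllers to uio_pci_generic before the simple_copy app runs against traddr 0000:00:10.0, as logged below.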
00:09:56.364 00:09:56.364 Controller QEMU NVMe Ctrl (12340 ) 00:09:56.364 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:56.364 Namespace Block Size:4096 00:09:56.364 Writing LBAs 0 to 63 with Random Data 00:09:56.364 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:56.364 LBAs matching Written Data: 64 00:09:56.364 00:09:56.364 real 0m0.257s 00:09:56.364 user 0m0.099s 00:09:56.364 sys 0m0.056s 00:09:56.364 22:54:35 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.364 ************************************ 00:09:56.364 END TEST nvme_simple_copy 00:09:56.364 ************************************ 00:09:56.364 22:54:35 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:56.364 ************************************ 00:09:56.364 END TEST nvme_scc 00:09:56.364 ************************************ 00:09:56.364 00:09:56.364 real 0m7.575s 00:09:56.364 user 0m1.102s 00:09:56.364 sys 0m1.367s 00:09:56.364 22:54:35 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.364 22:54:35 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:56.364 22:54:35 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:56.364 22:54:35 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:56.364 22:54:35 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:56.364 22:54:35 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:56.364 22:54:35 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:56.364 22:54:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:56.364 22:54:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.364 22:54:35 -- common/autotest_common.sh@10 -- # set +x 00:09:56.364 ************************************ 00:09:56.364 START TEST nvme_fdp 00:09:56.364 ************************************ 00:09:56.364 22:54:35 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:56.622 * Looking for test storage... 00:09:56.622 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:56.622 22:54:35 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:56.622 22:54:35 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:56.622 22:54:35 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:56.622 22:54:35 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:56.622 22:54:35 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:56.622 22:54:35 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:56.622 22:54:35 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:56.622 22:54:35 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:56.622 22:54:35 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:56.622 22:54:35 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:56.622 22:54:35 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:56.622 22:54:35 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:56.622 22:54:35 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:56.622 22:54:35 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:56.622 22:54:35 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:56.622 22:54:35 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:56.622 22:54:35 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:56.622 22:54:35 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:56.622 22:54:35 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:56.622 22:54:35 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:56.622 22:54:35 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:56.622 22:54:35 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:56.622 22:54:35 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:56.622 22:54:35 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:56.622 22:54:35 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:56.622 22:54:35 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:56.622 22:54:35 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:56.622 22:54:35 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:56.622 22:54:35 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:56.622 22:54:35 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:56.622 22:54:35 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:56.622 22:54:35 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:56.622 22:54:35 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:56.622 22:54:35 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:56.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.622 --rc genhtml_branch_coverage=1 00:09:56.622 --rc genhtml_function_coverage=1 00:09:56.622 --rc genhtml_legend=1 00:09:56.622 --rc geninfo_all_blocks=1 00:09:56.623 --rc geninfo_unexecuted_blocks=1 00:09:56.623 00:09:56.623 ' 00:09:56.623 22:54:35 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:56.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.623 --rc genhtml_branch_coverage=1 00:09:56.623 --rc genhtml_function_coverage=1 00:09:56.623 --rc genhtml_legend=1 00:09:56.623 --rc geninfo_all_blocks=1 00:09:56.623 --rc geninfo_unexecuted_blocks=1 00:09:56.623 00:09:56.623 ' 00:09:56.623 22:54:35 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:56.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.623 --rc genhtml_branch_coverage=1 00:09:56.623 --rc genhtml_function_coverage=1 00:09:56.623 --rc genhtml_legend=1 00:09:56.623 --rc geninfo_all_blocks=1 00:09:56.623 --rc geninfo_unexecuted_blocks=1 00:09:56.623 00:09:56.623 ' 00:09:56.623 22:54:35 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:56.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.623 --rc genhtml_branch_coverage=1 00:09:56.623 --rc genhtml_function_coverage=1 00:09:56.623 --rc genhtml_legend=1 00:09:56.623 --rc geninfo_all_blocks=1 00:09:56.623 --rc geninfo_unexecuted_blocks=1 00:09:56.623 00:09:56.623 ' 00:09:56.623 22:54:35 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:56.623 22:54:35 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:56.623 22:54:35 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:56.623 22:54:35 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:56.623 22:54:35 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:56.623 22:54:35 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:56.623 22:54:35 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.623 22:54:35 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.623 22:54:35 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.623 22:54:35 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.623 22:54:35 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.623 22:54:35 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.623 22:54:35 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:56.623 22:54:35 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.623 22:54:35 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:56.623 22:54:35 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:56.672 22:54:35 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:56.673 22:54:35 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:56.673 22:54:35 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:56.673 22:54:35 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:56.673 22:54:35 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:56.673 22:54:35 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:56.673 22:54:35 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:56.673 22:54:35 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:56.673 22:54:35 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:56.931 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:57.189 Waiting for block devices as requested 00:09:57.189 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:57.189 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:57.189 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:57.189 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:02.460 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:02.460 22:54:41 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:02.460 22:54:41 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:02.460 22:54:41 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:02.460 22:54:41 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:02.460 22:54:41 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:02.460 22:54:41 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:02.460 22:54:41 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:02.460 22:54:41 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:02.460 22:54:41 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:02.460 22:54:41 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:02.460 22:54:41 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:02.460 22:54:41 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:02.460 22:54:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:02.460 22:54:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:02.460 22:54:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:02.460 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.460 22:54:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:02.460 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.460 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:02.460 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.460 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.460 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:02.460 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:02.460 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:02.460 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.460 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.460 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:02.460 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:02.460 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:02.460 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.460 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.460 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:02.460 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.461 22:54:41 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:02.461 22:54:41 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:02.461 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:02.462 22:54:41 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.462 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.463 22:54:41 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:02.463 22:54:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.463 
22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:02.463 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:02.464 22:54:41 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:10:02.464 22:54:41 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:02.464 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:10:02.465 22:54:41 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.465 22:54:41 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:02.465 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
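The long run of IFS=: / read -r reg val / eval lines above is nvme_get converting nvme-cli's "field : value" id-ctrl and id-ns listings into bash associative arrays (nvme0, ng0n1, ...), one eval per register. A condensed sketch of the same pattern, assuming the same nvme-cli binary path and device node that appear in this log:

    # Parse "field : value" lines from nvme-cli into an associative array.
    declare -A ns
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}      # field names are space-padded
        [[ -n $reg && -n $val ]] || continue
        read -r val <<<"$val"         # trim surrounding whitespace
        ns[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1)

    echo "nsze=${ns[nsze]} flbas=${ns[flbas]} nlbaf=${ns[nlbaf]}"

Unlike this sketch, the traced nvme_get keeps each value verbatim (including space-padded strings such as sn and mn) and declares the array globally with -gA, so later helpers like get_nvme_ctrl_feature can take a nameref to it.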
00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:10:02.466 22:54:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
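The trace above is nvme_get populating the ng0n1 map: each "field : value" line that nvme-cli prints is split on the colon and eval'd into a globally scoped associative array. A minimal sketch of that pattern, assuming nvme-cli's identify output format (the names mirror nvme/functions.sh, but this is an illustration, not the script itself):

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # declares the global map, e.g. ng0n1
        while IFS=: read -r reg val; do
            reg=${reg// /}                   # 'lbaf  0 ' -> 'lbaf0', 'mssrl ' -> 'mssrl'
            [[ -n $reg && -n ${val// /} ]] || continue   # skip headers/blank values
            eval "${ref}[\$reg]=\${val# }"   # ng0n1[mssrl]=128, ng0n1[lbaf4]='ms:0 ...'
        done < <("$@")
    }
    # usage: nvme_get ng0n1 /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1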
00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:10:02.466 22:54:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:10:02.467 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:10:02.467 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:10:02.467 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:10:02.467 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:10:02.467 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:10:02.467 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:10:02.467 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:10:02.467 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:10:02.467 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:10:02.467 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:10:02.467 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:10:02.467 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:10:02.467 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:10:02.467 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:10:02.467 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:10:02.467 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
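The numbers just captured for nvme0n1 are internally consistent: flbas=0x4 selects lbaf4 above, whose lbads:12 means 4096-byte blocks, and nsze=0x140000 blocks at that size is exactly 5 GiB. A quick standalone check:

    nsze=$((0x140000))              # 1310720 logical blocks
    lbads=12                        # from lbaf4, the format flbas=0x4 points at
    echo $((nsze * (1 << lbads)))   # 5368709120 bytes = 5 GiB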
00:10:02.467 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:10:02.467 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:10:02.467 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:10:02.467 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:10:02.468 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:10:02.468 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:10:02.468 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:10:02.468 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:10:02.468 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:10:02.468 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:10:02.468 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:10:02.468 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:10:02.468 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:10:02.468 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:10:02.468 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:10:02.468 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:10:02.468 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:10:02.468 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:10:02.468 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:10:02.468 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:10:02.468 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:10:02.468 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:10:02.468 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:10:02.468 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:02.468 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:10:02.468 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:10:02.469 22:54:41 nvme_fdp -- scripts/common.sh@18 -- # local i
00:10:02.469 22:54:41 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:10:02.469 22:54:41 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:10:02.469 22:54:41 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
'nvme1[sn]="12340 "' 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:02.469 22:54:41 nvme_fdp -- 
00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0
00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
00:10:02.469 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0
00:10:02.470 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:10:02.470 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:10:02.470 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0
00:10:02.470 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0
00:10:02.470 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0
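ctratt=0x8000 is worth noting in an nvme_fdp run: Flexible Data Placement support is advertised through a CTRATT bit. Reading it as bit 19 (0x80000, per NVMe 2.0 TP4146) is an assumption of this sketch; by that reading, this controller does not advertise FDP:

    # Assumed FDP capability bit; nvme1's captured 0x8000 would not match.
    if ((nvme1[ctratt] & 0x80000)); then
        echo "nvme1 advertises Flexible Data Placement"
    else
        echo "nvme1: no FDP (ctratt=${nvme1[ctratt]})"
    fi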
00:10:02.470 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
00:10:02.470 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0
00:10:02.470 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0
00:10:02.470 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:10:02.470 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3
00:10:02.470 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:10:02.470 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:10:02.470 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:10:02.470 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:10:02.470 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:10:02.470 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:10:02.470 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:10:02.470 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:10:02.470 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:10:02.470 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:10:02.470 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:10:02.470 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:10:02.470 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:10:02.471 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
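wctemp and cctemp are reported in kelvins, so the thresholds captured above decode to 70 C and 100 C:

    echo "warning threshold:  $((nvme1[wctemp] - 273)) C"   # 343 K -> 70 C
    echo "critical threshold: $((nvme1[cctemp] - 273)) C"   # 373 K -> 100 C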
00:10:02.471 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:10:02.471 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:10:02.471 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:10:02.471 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:10:02.471 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:10:02.471 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:10:02.471 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:10:02.471 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:10:02.471 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:10:02.471 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:10:02.471 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:10:02.471 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:10:02.471 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:10:02.471 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:10:02.471 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:10:02.471 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:10:02.471 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:10:02.471 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:10:02.471 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:10:02.471 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:10:02.471 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:10:02.471 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:10:02.471 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:10:02.471 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:10:02.472 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
00:10:02.472 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0
00:10:02.472 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0
00:10:02.472 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
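sqes=0x66 and cqes=0x44 pack two log2 entry sizes into one byte each, low nibble required and high nibble maximum; decoding the captured values:

    sqes=$((0x66)) cqes=$((0x44))
    echo "SQ entry: $((1 << (sqes & 0xf)))-$((1 << (sqes >> 4))) bytes"   # 64-64
    echo "CQ entry: $((1 << (cqes & 0xf)))-$((1 << (cqes >> 4))) bytes"   # 16-16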
00:10:02.472 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0
00:10:02.472 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0
00:10:02.472 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0
00:10:02.472 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0
00:10:02.472 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0
00:10:02.472 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
00:10:02.472 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
00:10:02.472 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0
00:10:02.472 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0
00:10:02.472 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0
00:10:02.472 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
00:10:02.472 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0
00:10:02.472 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0
00:10:02.472 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0
00:10:02.472 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0
00:10:02.472 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
00:10:02.472 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
00:10:02.472 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:10:02.472 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
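Because nvme_get declares every map with local -gA, the identify data parsed here stays available as ordinary globals for later test steps, e.g. (illustrative):

    printf '%s: mn=%s fr=%s subnqn=%s\n' \
        nvme1 "${nvme1[mn]}" "${nvme1[fr]}" "${nvme1[subnqn]}"
    # e.g. -> nvme1: mn=QEMU NVMe Ctrl  fr=8.0.0  subnqn=nqn.2019-08.org.qemu:12340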
rwl:0 idle_power:- active_power:-' 00:10:02.472 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.472 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:10:02.473 22:54:41 
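Every field assignment in this trace is produced by the same small loop in nvme/functions.sh: each line of nvme-cli output is split on the first ':' into a register name and a value, and non-empty values are eval'd into a global associative array named after the device. A minimal sketch of that pattern, assuming the nvme-cli path shown in the trace (the real helper also normalizes keys and shifts its arguments slightly differently):

    #!/usr/bin/env bash
    # Sketch: parse "reg : val" lines from nvme-cli into a named assoc array.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                           # e.g. nvme1=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}                  # trim key padding
            [[ -n $val ]] && eval "$ref[$reg]=\"${val# }\""
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }
    nvme_get nvme1 id-ctrl /dev/nvme1                 # then: ${nvme1[subnqn]}, etc.

Because val is the last variable named in read, any further colons (as in the ps0 power-state string) stay inside the value, which is exactly what the ps0/rwt entries above show.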
00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 ng1n1[mc]=0x3 ng1n1[dpc]=0x1f ng1n1[dps]=0 ng1n1[nmic]=0 ng1n1[rescap]=0 ng1n1[fpi]=0 ng1n1[dlfeat]=1
00:10:02.473 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 ng1n1[nawupf]=0 ng1n1[nacwu]=0 ng1n1[nabsn]=0 ng1n1[nabo]=0 ng1n1[nabspf]=0 ng1n1[noiob]=0 ng1n1[nvmcap]=0
00:10:02.474 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 ng1n1[npwa]=0 ng1n1[npdg]=0 ng1n1[npda]=0 ng1n1[nows]=0 ng1n1[mssrl]=128 ng1n1[mcl]=128 ng1n1[msrc]=127
00:10:02.474 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 ng1n1[anagrpid]=0 ng1n1[nsattr]=0 ng1n1[nvmsetid]=0 ng1n1[endgid]=0 ng1n1[nguid]=00000000000000000000000000000000 ng1n1[eui64]=0000000000000000
00:10:02.474 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' ng1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:10:02.475 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:10:02.475 22:54:41 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
00:10:02.475 22:54:41 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:02.475 22:54:41 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:10:02.475 22:54:41 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:10:02.475 22:54:41 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:10:02.475 22:54:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:10:02.475 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a nvme1n1[ncap]=0x17a17a nvme1n1[nuse]=0x17a17a
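The for-ns pattern repeated above is a bash extglob: for /sys/class/nvme/nvme1 it expands to both the ng1n* character-device nodes and the nvme1n* block-device nodes, which is why ng1n1 and nvme1n1 are each parsed once. A self-contained illustration (the sysfs path is taken from the trace; the echo is only illustrative):

    #!/usr/bin/env bash
    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme1
    # ${ctrl##*nvme} -> "1" and ${ctrl##*/} -> "nvme1",
    # so the pattern below is @(ng1|nvme1n)* under $ctrl:
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "namespace node: ${ns##*/}"          # -> ng1n1, nvme1n1
    done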
00:10:02.475 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 nvme1n1[nlbaf]=7 nvme1n1[flbas]=0x7 nvme1n1[mc]=0x3 nvme1n1[dpc]=0x1f nvme1n1[dps]=0 nvme1n1[nmic]=0 nvme1n1[rescap]=0 nvme1n1[fpi]=0 nvme1n1[dlfeat]=1
00:10:02.476 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 nvme1n1[nawupf]=0 nvme1n1[nacwu]=0 nvme1n1[nabsn]=0 nvme1n1[nabo]=0 nvme1n1[nabspf]=0 nvme1n1[noiob]=0 nvme1n1[nvmcap]=0 nvme1n1[npwg]=0 nvme1n1[npwa]=0 nvme1n1[npdg]=0 nvme1n1[npda]=0 nvme1n1[nows]=0
00:10:02.476 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 nvme1n1[mcl]=128 nvme1n1[msrc]=127 nvme1n1[nulbaf]=0 nvme1n1[anagrpid]=0 nvme1n1[nsattr]=0 nvme1n1[nvmsetid]=0 nvme1n1[endgid]=0 nvme1n1[nguid]=00000000000000000000000000000000 nvme1n1[eui64]=0000000000000000
00:10:02.477 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:10:02.477 22:54:41 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:10:02.477 22:54:41 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:10:02.477 22:54:41 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:10:02.477 22:54:41 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:10:02.477 22:54:41 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:10:02.477 22:54:41 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:02.477 22:54:41 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:10:02.477 22:54:41 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:10:02.477 22:54:41 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:10:02.477 22:54:41 nvme_fdp -- scripts/common.sh@18 -- # local i
00:10:02.477 22:54:41 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:10:02.477 22:54:41 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:10:02.477 22:54:41 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:10:02.477 22:54:41 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:10:02.477 22:54:41 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:10:02.477 22:54:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
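Lines @58-@63 above are where a fully parsed controller gets registered: one namespace map per controller plus three global lookups keyed by the controller handle. A condensed sketch of that bookkeeping (the array names are the ones echoed in the trace; the surrounding scan loop is simplified):

    #!/usr/bin/env bash
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    ctrl_dev=nvme1 pci=0000:00:10.0
    ctrls["$ctrl_dev"]=$ctrl_dev                  # handle -> id-ctrl array name
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns             # handle -> namespace map (nvme1_ns)
    bdfs["$ctrl_dev"]=$pci                        # handle -> PCI BDF
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev    # sparse index 1 -> nvme1

The pci_can_use check that follows appears to gate each candidate BDF against optional allow/deny lists before the next controller is parsed; the [[ -z '' ]] in the trace is the empty-list case that lets every device through.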
00:10:02.477 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 nvme2[ssvid]=0x1af4 nvme2[sn]='12342 ' nvme2[mn]='QEMU NVMe Ctrl ' nvme2[fr]='8.0.0 ' nvme2[rab]=6 nvme2[ieee]=525400 nvme2[cmic]=0 nvme2[mdts]=7
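mdts=7 is an exponent, not a byte count: the NVMe spec defines the maximum data transfer size as 2^MDTS units of the controller's minimum memory page size. A quick sanity check, assuming the common 4 KiB minimum page (CAP.MPSMIN=0 is an assumption here, not something read from this log):

    #!/usr/bin/env bash
    mdts=7 mpsmin_bytes=$((4 * 1024))
    echo "max transfer: $(( (1 << mdts) * mpsmin_bytes / 1024 )) KiB"   # 512 KiB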
00:10:02.743 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 nvme2[ver]=0x10400 nvme2[rtd3r]=0 nvme2[rtd3e]=0 nvme2[oaes]=0x100 nvme2[ctratt]=0x8000 nvme2[rrls]=0 nvme2[cntrltype]=1 nvme2[fguid]=00000000-0000-0000-0000-000000000000
00:10:02.744 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 nvme2[crdt2]=0 nvme2[crdt3]=0 nvme2[nvmsr]=0 nvme2[vwci]=0 nvme2[mec]=0 nvme2[oacs]=0x12a nvme2[acl]=3 nvme2[aerl]=3
00:10:02.744 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 nvme2[lpa]=0x7 nvme2[elpe]=0 nvme2[npss]=0 nvme2[avscc]=0 nvme2[apsta]=0 nvme2[wctemp]=343 nvme2[cctemp]=373 nvme2[mtfa]=0 nvme2[hmpre]=0
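wctemp=343 and cctemp=373 read oddly until you recall that the spec reports these thresholds in kelvins. A one-liner to convert (pure arithmetic, nothing device-specific):

    #!/usr/bin/env bash
    for k in 343 373; do echo "$k K = $((k - 273)) C"; done   # 70 C warning, 100 C critical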
00:10:02.744 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 nvme2[tnvmcap]=0 nvme2[unvmcap]=0 nvme2[rpmbs]=0 nvme2[edstt]=0 nvme2[dsto]=0 nvme2[fwug]=0 nvme2[kas]=0 nvme2[hctma]=0 nvme2[mntmt]=0
00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 nvme2[sanicap]=0 nvme2[hmminds]=0 nvme2[hmmaxd]=0 nvme2[nsetidmax]=0 nvme2[endgidmax]=0 nvme2[anatt]=0 nvme2[anacap]=0 nvme2[anagrpmax]=0
00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 nvme2[pels]=0 nvme2[domainid]=0 nvme2[megcap]=0 nvme2[sqes]=0x66 nvme2[cqes]=0x44 nvme2[maxcmd]=0 nvme2[nn]=256 nvme2[oncs]=0x15d
'nvme2[fuses]="0"' 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.745 22:54:41 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.745 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
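The pattern repeating through this stretch of trace is the harness's nvme_get helper (nvme/functions.sh): it runs nvme-cli, splits every line of identify output on ':' into a register name and a value, and evals the pair into a named global associative array (nvme2 here). A minimal sketch of that pattern follows, assuming simplified whitespace trimming; the function name nvme_get_sketch and the trimming details are illustrative, and only the IFS=: / read / [[ -n ]] / eval skeleton is taken from the trace itself:

#!/usr/bin/env bash
# Sketch, not the SPDK helper: replay the eval pattern visible at
# functions.sh@16-23 in the trace above.
nvme_get_sketch() {
  local ref=$1 reg val
  shift
  local -gA "$ref=()"                 # global associative array (trace: @20)
  while IFS=: read -r reg val; do     # split each "reg : val" line (trace: @21)
    reg=${reg// /}                    # assumption: collapse padding, "ps    0" -> ps0
    val=${val# }                      # assumption: drop one leading space
    [[ -n $val ]] || continue         # header lines have no value (trace: @22)
    eval "${ref}[${reg}]=\"${val}\""  # store the register (trace: @23)
  done < <("$@")                      # e.g. nvme id-ctrl /dev/nvme2 (trace: @16)
}

# Hypothetical usage matching this controller:
#   nvme_get_sketch nvme2 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
#   echo "${nvme2[sqes]}"   # -> 0x66 per the trace above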
00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.746 
22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.746 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.747 22:54:41 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:10:02.747 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:10:02.748 22:54:41 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.748 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.749 
22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.749 22:54:41 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:10:02.749 
22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.749 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
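For orientation at this point in the trace: the enclosing loop (functions.sh@53-58) enumerates every namespace node of the controller, both the generic ng2nN character devices and the nvme2nN block devices, runs nvme_get ... id-ns on each, and records it in the controller's namespace map keyed by namespace index. A sketch of just that enumeration, with the id-ns call left as a comment; scan_ctrl_ns is a made-up wrapper name, while the extglob pattern and nameref mirror what the trace shows:

#!/usr/bin/env bash
# Sketch of the namespace walk traced here (functions.sh@53-58).
shopt -s extglob nullglob

scan_ctrl_ns() {
  local ctrl=$1 ns ns_dev
  declare -gA "${ctrl##*/}_ns=()"         # e.g. nvme2_ns
  local -n _ctrl_ns="${ctrl##*/}_ns"      # nameref, as at @53
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do  # @54
    [[ -e $ns ]] || continue              # @55
    ns_dev=${ns##*/}                      # ng2n1 ... nvme2n1 (@56)
    # real harness here: nvme_get "$ns_dev" id-ns "/dev/$ns_dev"  (@57)
    _ctrl_ns[${ns##*n}]=$ns_dev           # key = trailing namespace index (@58)
  done
}

scan_ctrl_ns /sys/class/nvme/nvme2
# After the walk, nvme2_ns[1..3] each hold the last-seen node for that
# namespace; the nvme2nN block names overwrite the ng2nN entries at the
# same key, which is why the trace revisits nvme2n1 next.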
00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:10:02.750 22:54:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:02.750 22:54:41 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:02.750 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:02.751 22:54:41 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:02.751 
22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:02.751 22:54:41 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:02.751 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:02.752 
22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
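[editor's note] Each namespace ends with eight lbaf0..lbaf7 entries like the ones just captured. Per the NVMe Identify Namespace data structure, ms is the metadata size in bytes, lbads is the log2 of the LBA data size (lbads:9 = 512-byte blocks, lbads:12 = 4096-byte blocks), rp is the relative performance hint, and "(in use)" marks the format currently selected by flbas; here flbas=0x4 selects lbaf4, which matches the "(in use)" tag in the trace. A hypothetical helper (not part of functions.sh) that decodes the block size from one of these captured strings:

  lbaf_block_size() {            # hypothetical decoder for the lbaf strings above
    local lbaf=$1 lbads
    lbads=${lbaf#*lbads:}        # 'ms:0 lbads:12 rp:0 (in use)' -> '12 rp:0 (in use)'
    lbads=${lbads%% *}           # -> '12'
    echo $((1 << lbads))         # lbads:9 -> 512, lbads:12 -> 4096
  }
  # e.g. lbaf_block_size "${nvme2n1[lbaf4]}" prints 4096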
00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:02.752 22:54:41 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:02.752 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:02.753 22:54:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.753 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.754 22:54:41 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:02.754 22:54:41 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.754 22:54:41 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:02.754 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:02.755 22:54:41 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.755 22:54:41 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:02.755 22:54:41 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:02.755 22:54:41 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:02.755 22:54:41 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:02.755 22:54:41 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:02.755 22:54:41 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:02.755 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.756 22:54:41 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.756 22:54:41 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.756 
22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:02.756 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.757 22:54:41 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.757 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
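The long trace above is nvme/functions.sh capturing every `reg : val` pair that `nvme id-ctrl` prints into a per-controller associative array: `IFS=:` splits each line, `read -r reg val` consumes it, and `eval` performs the keyed assignment. A minimal standalone sketch of that pattern; the `parse_id_ctrl` helper and the here-doc sample are illustrative stand-ins, not part of the SPDK scripts:

```bash
#!/usr/bin/env bash
# Sketch of the IFS=: / read / eval loop traced above: turn "reg : val"
# lines from `nvme id-ctrl` into an associative array.
declare -A nvme3=()

parse_id_ctrl() {  # hypothetical helper, not from nvme/functions.sh
  local reg val
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}              # drop padding around the key
    val=${val#"${val%%[![:space:]]*}"}    # trim leading whitespace
    val=${val%"${val##*[![:space:]]}"}    # trim trailing whitespace
    [[ -n $reg && -n $val ]] || continue
    eval "nvme3[$reg]=\"$val\""           # mirrors eval 'nvme3[vid]="0x1b36"'
  done
}

# Here-doc stands in for `/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3`
parse_id_ctrl <<'EOF'
vid     : 0x1b36
ssvid   : 0x1af4
mdts    : 7
ctratt  : 0x88010
EOF

echo "ctratt=${nvme3[ctratt]}"   # -> ctratt=0x88010
```

Direct assignment (`nvme3[$reg]=$val`) would behave the same here; the `eval` form matches the trace, where the target array name is itself held in a variable.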
00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:02.758 22:54:41 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:02.759 22:54:41 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:10:02.759 22:54:41 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:10:02.759 22:54:41 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:02.759 22:54:41 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:02.759 22:54:41 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:03.326 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:03.584 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:03.584 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:03.584 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:03.841 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:03.841 22:54:42 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:03.842 22:54:42 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:03.842 22:54:42 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.842 22:54:42 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:03.842 ************************************ 00:10:03.842 START TEST nvme_flexible_data_placement 00:10:03.842 ************************************ 00:10:03.842 22:54:42 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:04.100 Initializing NVMe Controllers 00:10:04.100 Attaching to 0000:00:13.0 00:10:04.100 Controller supports FDP Attached to 0000:00:13.0 00:10:04.100 Namespace ID: 1 Endurance Group ID: 1 00:10:04.100 Initialization complete. 
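The controller-selection pass that just ran reduces to two bash ingredients: a nameref lookup of the parsed `ctratt` value and a test of CTRATT bit 19, which is why the three controllers reporting 0x8000 are skipped and only nvme3 (0x88010) is echoed as FDP-capable. A self-contained sketch under those assumptions, with the four values hard-coded for illustration:

```bash
#!/usr/bin/env bash
# Sketch of the FDP-capability check traced above: look up ctratt through a
# nameref, then test bit 19, as `(( ctratt & 1 << 19 ))` does in the trace.
declare -A nvme0=([ctratt]=0x8000) nvme1=([ctratt]=0x8000)
declare -A nvme2=([ctratt]=0x8000) nvme3=([ctratt]=0x88010)

get_ctratt() {
  local -n _ctrl=$1          # nameref, like `local -n _ctrl=nvme3` above
  echo "${_ctrl[ctratt]}"
}

ctrl_has_fdp() {
  local ctratt
  ctratt=$(get_ctratt "$1")
  (( ctratt & 1 << 19 ))     # true only when the FDP bit of CTRATT is set
}

for ctrl in nvme0 nvme1 nvme2 nvme3; do
  ctrl_has_fdp "$ctrl" && echo "$ctrl supports FDP (ctratt=$(get_ctratt "$ctrl"))"
done
# -> nvme3 supports FDP (ctratt=0x88010)
```

Since 0x88010 & (1 << 19) is 0x80000, only nvme3 passes the test.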
00:10:04.100 00:10:04.100 ================================== 00:10:04.100 == FDP tests for Namespace: #01 == 00:10:04.100 ================================== 00:10:04.100 00:10:04.100 Get Feature: FDP: 00:10:04.100 ================= 00:10:04.100 Enabled: Yes 00:10:04.100 FDP configuration Index: 0 00:10:04.100 00:10:04.100 FDP configurations log page 00:10:04.100 =========================== 00:10:04.100 Number of FDP configurations: 1 00:10:04.100 Version: 0 00:10:04.100 Size: 112 00:10:04.100 FDP Configuration Descriptor: 0 00:10:04.100 Descriptor Size: 96 00:10:04.101 Reclaim Group Identifier format: 2 00:10:04.101 FDP Volatile Write Cache: Not Present 00:10:04.101 FDP Configuration: Valid 00:10:04.101 Vendor Specific Size: 0 00:10:04.101 Number of Reclaim Groups: 2 00:10:04.101 Number of Reclaim Unit Handles: 8 00:10:04.101 Max Placement Identifiers: 128 00:10:04.101 Number of Namespaces Supported: 256 00:10:04.101 Reclaim Unit Nominal Size: 6000000 bytes 00:10:04.101 Estimated Reclaim Unit Time Limit: Not Reported 00:10:04.101 RUH Desc #000: RUH Type: Initially Isolated 00:10:04.101 RUH Desc #001: RUH Type: Initially Isolated 00:10:04.101 RUH Desc #002: RUH Type: Initially Isolated 00:10:04.101 RUH Desc #003: RUH Type: Initially Isolated 00:10:04.101 RUH Desc #004: RUH Type: Initially Isolated 00:10:04.101 RUH Desc #005: RUH Type: Initially Isolated 00:10:04.101 RUH Desc #006: RUH Type: Initially Isolated 00:10:04.101 RUH Desc #007: RUH Type: Initially Isolated 00:10:04.101 00:10:04.101 FDP reclaim unit handle usage log page 00:10:04.101 ====================================== 00:10:04.101 Number of Reclaim Unit Handles: 8 00:10:04.101 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:04.101 RUH Usage Desc #001: RUH Attributes: Unused 00:10:04.101 RUH Usage Desc #002: RUH Attributes: Unused 00:10:04.101 RUH Usage Desc #003: RUH Attributes: Unused 00:10:04.101 RUH Usage Desc #004: RUH Attributes: Unused 00:10:04.101 RUH Usage Desc #005: RUH Attributes: Unused 00:10:04.101 RUH Usage Desc #006: RUH Attributes: Unused 00:10:04.101 RUH Usage Desc #007: RUH Attributes: Unused 00:10:04.101 00:10:04.101 FDP statistics log page 00:10:04.101 ======================= 00:10:04.101 Host bytes with metadata written: 1501323264 00:10:04.101 Media bytes with metadata written: 1502232576 00:10:04.101 Media bytes erased: 0 00:10:04.101 00:10:04.101 FDP Reclaim unit handle status 00:10:04.101 ============================== 00:10:04.101 Number of RUHS descriptors: 2 00:10:04.101 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x000000000000083a 00:10:04.101 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:10:04.101 00:10:04.101 FDP write on placement id: 0 success 00:10:04.101 00:10:04.101 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:10:04.101 00:10:04.101 IO mgmt send: RUH update for Placement ID: #0 Success 00:10:04.101 00:10:04.101 Get Feature: FDP Events for Placement handle: #0 00:10:04.101 ======================== 00:10:04.101 Number of FDP Events: 6 00:10:04.101 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:10:04.101 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:10:04.101 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:10:04.101 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:10:04.101 FDP Event: #4 Type: Media Reallocated Enabled: No 00:10:04.101 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:10:04.101 00:10:04.101 FDP events log
page 00:10:04.101 =================== 00:10:04.101 Number of FDP events: 1 00:10:04.101 FDP Event #0: 00:10:04.101 Event Type: RU Not Written to Capacity 00:10:04.101 Placement Identifier: Valid 00:10:04.101 NSID: Valid 00:10:04.101 Location: Valid 00:10:04.101 Placement Identifier: 0 00:10:04.101 Event Timestamp: 2 00:10:04.101 Namespace Identifier: 1 00:10:04.101 Reclaim Group Identifier: 0 00:10:04.101 Reclaim Unit Handle Identifier: 0 00:10:04.101 00:10:04.101 FDP test passed 00:10:04.101 00:10:04.101 real 0m0.231s 00:10:04.101 user 0m0.071s 00:10:04.101 sys 0m0.059s 00:10:04.101 22:54:43 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.101 22:54:43 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:10:04.101 ************************************ 00:10:04.101 END TEST nvme_flexible_data_placement 00:10:04.101 ************************************ 00:10:04.101 00:10:04.101 real 0m7.598s 00:10:04.101 user 0m1.100s 00:10:04.101 sys 0m1.400s 00:10:04.101 22:54:43 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.101 22:54:43 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:04.101 ************************************ 00:10:04.101 END TEST nvme_fdp 00:10:04.101 ************************************ 00:10:04.101 22:54:43 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:10:04.101 22:54:43 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:04.101 22:54:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:04.101 22:54:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.101 22:54:43 -- common/autotest_common.sh@10 -- # set +x 00:10:04.101 ************************************ 00:10:04.101 START TEST nvme_rpc 00:10:04.101 ************************************ 00:10:04.101 22:54:43 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:04.101 * Looking for test storage... 
00:10:04.101 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:04.101 22:54:43 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:04.101 22:54:43 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:04.101 22:54:43 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:04.359 22:54:43 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:04.360 22:54:43 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.360 22:54:43 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.360 22:54:43 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.360 22:54:43 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.360 22:54:43 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.360 22:54:43 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.360 22:54:43 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.360 22:54:43 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.360 22:54:43 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.360 22:54:43 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.360 22:54:43 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.360 22:54:43 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:04.360 22:54:43 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:10:04.360 22:54:43 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.360 22:54:43 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:04.360 22:54:43 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:04.360 22:54:43 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:10:04.360 22:54:43 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.360 22:54:43 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:10:04.360 22:54:43 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.360 22:54:43 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:04.360 22:54:43 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:10:04.360 22:54:43 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.360 22:54:43 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:10:04.360 22:54:43 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.360 22:54:43 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.360 22:54:43 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.360 22:54:43 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:10:04.360 22:54:43 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.360 22:54:43 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:04.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.360 --rc genhtml_branch_coverage=1 00:10:04.360 --rc genhtml_function_coverage=1 00:10:04.360 --rc genhtml_legend=1 00:10:04.360 --rc geninfo_all_blocks=1 00:10:04.360 --rc geninfo_unexecuted_blocks=1 00:10:04.360 00:10:04.360 ' 00:10:04.360 22:54:43 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:04.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.360 --rc genhtml_branch_coverage=1 00:10:04.360 --rc genhtml_function_coverage=1 00:10:04.360 --rc genhtml_legend=1 00:10:04.360 --rc geninfo_all_blocks=1 00:10:04.360 --rc geninfo_unexecuted_blocks=1 00:10:04.360 00:10:04.360 ' 00:10:04.360 22:54:43 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:04.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.360 --rc genhtml_branch_coverage=1 00:10:04.360 --rc genhtml_function_coverage=1 00:10:04.360 --rc genhtml_legend=1 00:10:04.360 --rc geninfo_all_blocks=1 00:10:04.360 --rc geninfo_unexecuted_blocks=1 00:10:04.360 00:10:04.360 ' 00:10:04.360 22:54:43 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:04.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.360 --rc genhtml_branch_coverage=1 00:10:04.360 --rc genhtml_function_coverage=1 00:10:04.360 --rc genhtml_legend=1 00:10:04.360 --rc geninfo_all_blocks=1 00:10:04.360 --rc geninfo_unexecuted_blocks=1 00:10:04.360 00:10:04.360 ' 00:10:04.360 22:54:43 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:04.360 22:54:43 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:04.360 22:54:43 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:04.360 22:54:43 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:10:04.360 22:54:43 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:10:04.360 22:54:43 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:10:04.360 22:54:43 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:04.360 22:54:43 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:10:04.360 22:54:43 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:04.360 22:54:43 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:04.360 22:54:43 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:04.360 22:54:43 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:04.360 22:54:43 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:04.360 22:54:43 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:10:04.360 22:54:43 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:04.360 22:54:43 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=79118 00:10:04.360 22:54:43 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:04.360 22:54:43 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:04.360 22:54:43 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 79118 00:10:04.360 22:54:43 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 79118 ']' 00:10:04.360 22:54:43 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.360 22:54:43 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.360 22:54:43 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.360 22:54:43 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.360 22:54:43 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:04.360 [2024-11-26 22:54:43.393014] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
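`get_first_nvme_bdf` above amounts to one jq extraction over the JSON emitted by scripts/gen_nvme.sh, followed by taking the first element. A sketch of that step; the inline JSON is a stand-in for the script's real output on this rig:

```bash
#!/usr/bin/env bash
# Sketch: collect PCI addresses the way get_nvme_bdfs does, i.e.
# bdfs=($(gen_nvme.sh | jq -r '.config[].params.traddr')), then keep the first.
json='{"config":[{"params":{"traddr":"0000:00:10.0"}},
                 {"params":{"traddr":"0000:00:11.0"}},
                 {"params":{"traddr":"0000:00:12.0"}},
                 {"params":{"traddr":"0000:00:13.0"}}]}'

bdfs=($(jq -r '.config[].params.traddr' <<<"$json"))
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe devices found" >&2; exit 1; }
echo "first bdf: ${bdfs[0]}"   # -> first bdf: 0000:00:10.0
```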
00:10:04.360 [2024-11-26 22:54:43.393141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79118 ] 00:10:04.618 [2024-11-26 22:54:43.527508] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:04.618 [2024-11-26 22:54:43.557792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:04.618 [2024-11-26 22:54:43.583330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.618 [2024-11-26 22:54:43.583428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.186 22:54:44 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:05.186 22:54:44 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:05.186 22:54:44 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:05.447 Nvme0n1 00:10:05.447 22:54:44 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:05.447 22:54:44 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:05.708 request: 00:10:05.708 { 00:10:05.708 "bdev_name": "Nvme0n1", 00:10:05.708 "filename": "non_existing_file", 00:10:05.708 "method": "bdev_nvme_apply_firmware", 00:10:05.708 "req_id": 1 00:10:05.708 } 00:10:05.708 Got JSON-RPC error response 00:10:05.708 response: 00:10:05.708 { 00:10:05.708 "code": -32603, 00:10:05.708 "message": "open file failed." 00:10:05.708 } 00:10:05.708 22:54:44 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:05.708 22:54:44 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:05.708 22:54:44 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:05.969 22:54:44 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:05.969 22:54:44 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 79118 00:10:05.969 22:54:44 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 79118 ']' 00:10:05.969 22:54:44 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 79118 00:10:05.969 22:54:44 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:10:05.969 22:54:44 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:05.969 22:54:44 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79118 00:10:05.969 22:54:44 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:05.969 22:54:44 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:05.969 killing process with pid 79118 00:10:05.969 22:54:44 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79118' 00:10:05.969 22:54:44 nvme_rpc -- common/autotest_common.sh@973 -- # kill 79118 00:10:05.969 22:54:44 nvme_rpc -- common/autotest_common.sh@978 -- # wait 79118 00:10:06.230 00:10:06.230 real 0m2.141s 00:10:06.230 user 0m4.108s 00:10:06.230 sys 0m0.550s 00:10:06.230 22:54:45 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.230 22:54:45 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.230 ************************************ 00:10:06.230 END TEST nvme_rpc 00:10:06.230 ************************************ 00:10:06.230 22:54:45 
-- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:06.230 22:54:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:06.230 22:54:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.230 22:54:45 -- common/autotest_common.sh@10 -- # set +x 00:10:06.230 ************************************ 00:10:06.230 START TEST nvme_rpc_timeouts 00:10:06.231 ************************************ 00:10:06.231 22:54:45 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:06.491 * Looking for test storage... 00:10:06.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:06.491 22:54:45 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:06.491 22:54:45 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:10:06.491 22:54:45 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:06.491 22:54:45 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:06.491 22:54:45 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:06.491 22:54:45 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:06.491 22:54:45 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:06.491 22:54:45 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.491 22:54:45 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:06.491 22:54:45 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:06.491 22:54:45 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:06.491 22:54:45 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:06.491 22:54:45 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:06.491 22:54:45 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:06.491 22:54:45 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:06.491 22:54:45 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:06.491 22:54:45 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:06.491 22:54:45 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:06.491 22:54:45 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:06.491 22:54:45 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:06.491 22:54:45 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:06.491 22:54:45 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.491 22:54:45 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:06.491 22:54:45 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.491 22:54:45 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:06.491 22:54:45 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:06.491 22:54:45 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.491 22:54:45 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:06.491 22:54:45 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:06.491 22:54:45 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.491 22:54:45 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.491 22:54:45 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:06.491 22:54:45 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.491 22:54:45 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:06.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.491 --rc genhtml_branch_coverage=1 00:10:06.491 --rc genhtml_function_coverage=1 00:10:06.491 --rc genhtml_legend=1 00:10:06.491 --rc geninfo_all_blocks=1 00:10:06.491 --rc geninfo_unexecuted_blocks=1 00:10:06.491 00:10:06.491 ' 00:10:06.491 22:54:45 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:06.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.491 --rc genhtml_branch_coverage=1 00:10:06.491 --rc genhtml_function_coverage=1 00:10:06.491 --rc genhtml_legend=1 00:10:06.491 --rc geninfo_all_blocks=1 00:10:06.491 --rc geninfo_unexecuted_blocks=1 00:10:06.491 00:10:06.491 ' 00:10:06.491 22:54:45 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:06.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.491 --rc genhtml_branch_coverage=1 00:10:06.491 --rc genhtml_function_coverage=1 00:10:06.491 --rc genhtml_legend=1 00:10:06.491 --rc geninfo_all_blocks=1 00:10:06.491 --rc geninfo_unexecuted_blocks=1 00:10:06.491 00:10:06.491 ' 00:10:06.491 22:54:45 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:06.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.491 --rc genhtml_branch_coverage=1 00:10:06.491 --rc genhtml_function_coverage=1 00:10:06.491 --rc genhtml_legend=1 00:10:06.491 --rc geninfo_all_blocks=1 00:10:06.491 --rc geninfo_unexecuted_blocks=1 00:10:06.491 00:10:06.491 ' 00:10:06.491 22:54:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:06.491 22:54:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_79171 00:10:06.491 22:54:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_79171 00:10:06.491 22:54:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=79207 00:10:06.491 22:54:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
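The nvme_rpc test traced above exercises the firmware-download error path over JSON-RPC: attach a controller, point bdev_nvme_apply_firmware at a file that does not exist, expect the -32603 "open file failed." response, then detach. A minimal sketch of that sequence, assuming an spdk_tgt already listening on /var/tmp/spdk.sock and the rpc.py path used throughout this run:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Attach a PCIe controller; on success this prints the new bdev name (Nvme0n1).
    $RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    # Deliberately pass a missing file; the call must fail with code -32603.
    if $RPC bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
        echo "expected 'open file failed.' error" >&2; exit 1
    fi
    $RPC bdev_nvme_detach_controller Nvme0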
00:10:06.491 22:54:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 79207 00:10:06.491 22:54:45 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 79207 ']' 00:10:06.491 22:54:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:06.491 22:54:45 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.491 22:54:45 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.491 22:54:45 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.491 22:54:45 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.491 22:54:45 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:06.491 [2024-11-26 22:54:45.528390] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:10:06.491 [2024-11-26 22:54:45.528514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79207 ] 00:10:06.752 [2024-11-26 22:54:45.663160] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:06.752 [2024-11-26 22:54:45.691440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:06.752 [2024-11-26 22:54:45.716824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.752 [2024-11-26 22:54:45.716883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.324 22:54:46 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.324 22:54:46 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:10:07.324 Checking default timeout settings: 00:10:07.325 22:54:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:07.325 22:54:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:07.586 Making settings changes with rpc: 00:10:07.586 22:54:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:07.586 22:54:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:07.845 Check default vs. modified settings: 00:10:07.845 22:54:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:10:07.845 22:54:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:08.106 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:08.106 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:08.106 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_79171 00:10:08.106 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:08.106 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:08.106 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:08.106 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_79171 00:10:08.106 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:08.106 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:08.106 Setting action_on_timeout is changed as expected. 00:10:08.106 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:08.106 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:08.106 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:10:08.106 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:08.367 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_79171 00:10:08.367 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:08.367 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:08.367 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:08.367 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_79171 00:10:08.367 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:08.367 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:08.367 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:08.367 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:08.367 Setting timeout_us is changed as expected. 00:10:08.367 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:10:08.367 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:08.367 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_79171 00:10:08.367 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:08.367 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:08.367 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:08.367 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_79171 00:10:08.367 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:08.367 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:08.367 Setting timeout_admin_us is changed as expected. 00:10:08.367 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:08.367 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:08.367 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:08.368 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:08.368 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_79171 /tmp/settings_modified_79171 00:10:08.368 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 79207 00:10:08.368 22:54:47 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 79207 ']' 00:10:08.368 22:54:47 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 79207 00:10:08.368 22:54:47 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:10:08.368 22:54:47 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:08.368 22:54:47 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79207 00:10:08.368 22:54:47 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:08.368 22:54:47 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:08.368 killing process with pid 79207 00:10:08.368 22:54:47 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79207' 00:10:08.368 22:54:47 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 79207 00:10:08.368 22:54:47 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 79207 00:10:08.941 RPC TIMEOUT SETTING TEST PASSED. 00:10:08.941 22:54:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
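The check loop above reads each setting out of a save_config dump taken before and after bdev_nvme_set_options and verifies that the value actually changed. A condensed sketch of that round-trip, assuming the same rpc.py path (the $$-based temp-file names here are illustrative, not the pid-based names from the trace):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    before=/tmp/settings_default_$$ after=/tmp/settings_modified_$$
    $RPC save_config > "$before"     # defaults: action_on_timeout=none, both timeouts 0
    $RPC bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    $RPC save_config > "$after"
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        old=$(grep "$setting" "$before" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        new=$(grep "$setting" "$after"  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [ "$old" = "$new" ] && { echo "Setting $setting unchanged" >&2; exit 1; }
        echo "Setting $setting is changed as expected."
    done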
00:10:08.941 00:10:08.941 real 0m2.526s 00:10:08.941 user 0m4.912s 00:10:08.941 sys 0m0.586s 00:10:08.941 22:54:47 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.941 ************************************ 00:10:08.941 END TEST nvme_rpc_timeouts 00:10:08.941 ************************************ 00:10:08.941 22:54:47 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:08.941 22:54:47 -- spdk/autotest.sh@239 -- # uname -s 00:10:08.941 22:54:47 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:08.941 22:54:47 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:08.941 22:54:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:08.941 22:54:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.941 22:54:47 -- common/autotest_common.sh@10 -- # set +x 00:10:08.941 ************************************ 00:10:08.941 START TEST sw_hotplug 00:10:08.941 ************************************ 00:10:08.941 22:54:47 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:08.941 * Looking for test storage... 00:10:08.941 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:08.941 22:54:47 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:08.941 22:54:47 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:10:08.941 22:54:47 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:08.941 22:54:48 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:08.941 22:54:48 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:08.941 22:54:48 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:08.941 22:54:48 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:08.941 22:54:48 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.941 22:54:48 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:08.941 22:54:48 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:08.941 22:54:48 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:08.941 22:54:48 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:08.941 22:54:48 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:08.941 22:54:48 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:08.941 22:54:48 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:08.941 22:54:48 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:08.941 22:54:48 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:08.941 22:54:48 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:08.941 22:54:48 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:08.941 22:54:48 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:08.941 22:54:48 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:08.941 22:54:48 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.941 22:54:48 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:09.201 22:54:48 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:09.201 22:54:48 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:09.201 22:54:48 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:09.201 22:54:48 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:09.201 22:54:48 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:09.201 22:54:48 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:09.201 22:54:48 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:09.201 22:54:48 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:09.201 22:54:48 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:09.201 22:54:48 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:09.201 22:54:48 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:09.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.201 --rc genhtml_branch_coverage=1 00:10:09.201 --rc genhtml_function_coverage=1 00:10:09.201 --rc genhtml_legend=1 00:10:09.201 --rc geninfo_all_blocks=1 00:10:09.201 --rc geninfo_unexecuted_blocks=1 00:10:09.201 00:10:09.201 ' 00:10:09.201 22:54:48 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:09.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.201 --rc genhtml_branch_coverage=1 00:10:09.201 --rc genhtml_function_coverage=1 00:10:09.201 --rc genhtml_legend=1 00:10:09.201 --rc geninfo_all_blocks=1 00:10:09.201 --rc geninfo_unexecuted_blocks=1 00:10:09.201 00:10:09.201 ' 00:10:09.201 22:54:48 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:09.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.201 --rc genhtml_branch_coverage=1 00:10:09.201 --rc genhtml_function_coverage=1 00:10:09.201 --rc genhtml_legend=1 00:10:09.201 --rc geninfo_all_blocks=1 00:10:09.201 --rc geninfo_unexecuted_blocks=1 00:10:09.201 00:10:09.201 ' 00:10:09.201 22:54:48 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:09.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.201 --rc genhtml_branch_coverage=1 00:10:09.201 --rc genhtml_function_coverage=1 00:10:09.201 --rc genhtml_legend=1 00:10:09.201 --rc geninfo_all_blocks=1 00:10:09.201 --rc geninfo_unexecuted_blocks=1 00:10:09.201 00:10:09.201 ' 00:10:09.201 22:54:48 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:09.536 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:09.536 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:09.536 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:09.536 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:09.536 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:09.536 22:54:48 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:09.536 22:54:48 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:09.536 22:54:48 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:10:09.536 22:54:48 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:09.536 22:54:48 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:09.536 22:54:48 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:09.536 22:54:48 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:09.536 22:54:48 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:09.537 22:54:48 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:09.537 22:54:48 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:09.537 22:54:48 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:09.537 22:54:48 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:09.537 22:54:48 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:09.811 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:10.069 Waiting for block devices as requested 00:10:10.069 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:10.069 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:10.326 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:10.326 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:15.605 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:15.605 22:54:54 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:15.605 22:54:54 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:15.863 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:15.863 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:15.863 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:16.122 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:16.380 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:16.380 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:16.380 22:54:55 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:16.380 22:54:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:16.380 22:54:55 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:16.380 22:54:55 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:16.380 22:54:55 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=80054 00:10:16.380 22:54:55 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:16.380 22:54:55 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:16.380 22:54:55 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:16.380 22:54:55 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:16.380 22:54:55 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:16.380 22:54:55 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:16.380 22:54:55 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:16.380 22:54:55 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:16.380 22:54:55 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:10:16.380 22:54:55 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:16.380 22:54:55 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:16.380 22:54:55 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:16.380 22:54:55 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:16.380 22:54:55 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:16.638 Initializing NVMe Controllers 00:10:16.638 Attaching to 0000:00:10.0 00:10:16.638 Attaching to 0000:00:11.0 00:10:16.638 Attached to 0000:00:10.0 00:10:16.638 Attached to 0000:00:11.0 00:10:16.638 Initialization complete. Starting I/O... 
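What follows in the trace is the surprise-removal pass: while the hotplug example is mid-I/O against both controllers, the script yanks each allowed device and later rescans the bus. The bare 'echo 1' commands hide their redirection targets (xtrace does not print them), but the standard PCI sysfs interface being driven looks like this sketch, with the sysfs paths as an assumption:

    # Assumed sysfs targets; only the echo commands themselves appear in the xtrace.
    for bdf in 0000:00:10.0 0000:00:11.0; do
        echo 1 > "/sys/bus/pci/devices/$bdf/remove"   # surprise-remove under I/O
    done
    sleep 6                                            # hotplug_wait from the trace
    echo 1 > /sys/bus/pci/rescan                       # re-enumerate; devices reattach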
00:10:16.638 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:16.638 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:16.638 00:10:17.574 QEMU NVMe Ctrl (12340 ): 3012 I/Os completed (+3012) 00:10:17.574 QEMU NVMe Ctrl (12341 ): 3051 I/Os completed (+3051) 00:10:17.574 00:10:18.948 QEMU NVMe Ctrl (12340 ): 6816 I/Os completed (+3804) 00:10:18.948 QEMU NVMe Ctrl (12341 ): 6863 I/Os completed (+3812) 00:10:18.948 00:10:19.881 QEMU NVMe Ctrl (12340 ): 10371 I/Os completed (+3555) 00:10:19.881 QEMU NVMe Ctrl (12341 ): 10455 I/Os completed (+3592) 00:10:19.881 00:10:20.813 QEMU NVMe Ctrl (12340 ): 14379 I/Os completed (+4008) 00:10:20.813 QEMU NVMe Ctrl (12341 ): 14564 I/Os completed (+4109) 00:10:20.813 00:10:21.815 QEMU NVMe Ctrl (12340 ): 17914 I/Os completed (+3535) 00:10:21.815 QEMU NVMe Ctrl (12341 ): 18074 I/Os completed (+3510) 00:10:21.815 00:10:22.382 22:55:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:22.382 22:55:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:22.382 22:55:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:22.382 [2024-11-26 22:55:01.460699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:22.382 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:22.382 [2024-11-26 22:55:01.461786] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.382 [2024-11-26 22:55:01.461831] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.382 [2024-11-26 22:55:01.461848] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.382 [2024-11-26 22:55:01.461862] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.382 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:22.382 [2024-11-26 22:55:01.463098] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.382 [2024-11-26 22:55:01.463143] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.382 [2024-11-26 22:55:01.463159] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.382 [2024-11-26 22:55:01.463170] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.382 22:55:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:22.382 22:55:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:22.382 [2024-11-26 22:55:01.475599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:22.382 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:22.382 [2024-11-26 22:55:01.476527] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.382 [2024-11-26 22:55:01.476566] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.382 [2024-11-26 22:55:01.476581] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.382 [2024-11-26 22:55:01.476597] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.382 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:22.382 [2024-11-26 22:55:01.477645] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.382 [2024-11-26 22:55:01.477676] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.382 [2024-11-26 22:55:01.477692] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.382 [2024-11-26 22:55:01.477706] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.382 22:55:01 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:22.382 22:55:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:22.382 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:22.382 EAL: Scan for (pci) bus failed. 00:10:22.640 22:55:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:22.640 22:55:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:22.640 22:55:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:22.640 22:55:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:22.640 22:55:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:22.640 22:55:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:22.640 22:55:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:22.640 22:55:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:22.640 Attaching to 0000:00:10.0 00:10:22.640 Attached to 0000:00:10.0 00:10:22.640 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:22.640 00:10:22.640 22:55:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:22.640 22:55:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:22.640 22:55:01 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:22.640 Attaching to 0000:00:11.0 00:10:22.640 Attached to 0000:00:11.0 00:10:23.574 QEMU NVMe Ctrl (12340 ): 3458 I/Os completed (+3458) 00:10:23.574 QEMU NVMe Ctrl (12341 ): 3170 I/Os completed (+3170) 00:10:23.574 00:10:24.948 QEMU NVMe Ctrl (12340 ): 7198 I/Os completed (+3740) 00:10:24.948 QEMU NVMe Ctrl (12341 ): 6998 I/Os completed (+3828) 00:10:24.948 00:10:25.882 QEMU NVMe Ctrl (12340 ): 11110 I/Os completed (+3912) 00:10:25.882 QEMU NVMe Ctrl (12341 ): 10938 I/Os completed (+3940) 00:10:25.882 00:10:26.815 QEMU NVMe Ctrl (12340 ): 14719 I/Os completed (+3609) 00:10:26.815 QEMU NVMe Ctrl (12341 ): 14530 I/Os completed (+3592) 00:10:26.815 00:10:27.750 QEMU NVMe Ctrl (12340 ): 18229 I/Os completed (+3510) 00:10:27.750 QEMU NVMe Ctrl (12341 ): 18000 I/Os completed (+3470) 00:10:27.750 00:10:28.684 QEMU NVMe Ctrl (12340 ): 22212 I/Os completed (+3983) 00:10:28.684 QEMU NVMe Ctrl (12341 ): 21844 I/Os completed (+3844) 00:10:28.684 00:10:29.617 QEMU NVMe Ctrl (12340 ): 26405 I/Os completed (+4193) 
00:10:29.617 QEMU NVMe Ctrl (12341 ): 26037 I/Os completed (+4193) 00:10:29.617 00:10:30.557 QEMU NVMe Ctrl (12340 ): 30261 I/Os completed (+3856) 00:10:30.557 QEMU NVMe Ctrl (12341 ): 29816 I/Os completed (+3779) 00:10:30.557 00:10:31.931 QEMU NVMe Ctrl (12340 ): 34557 I/Os completed (+4296) 00:10:31.931 QEMU NVMe Ctrl (12341 ): 33973 I/Os completed (+4157) 00:10:31.931 00:10:32.865 QEMU NVMe Ctrl (12340 ): 39111 I/Os completed (+4554) 00:10:32.865 QEMU NVMe Ctrl (12341 ): 38344 I/Os completed (+4371) 00:10:32.865 00:10:33.798 QEMU NVMe Ctrl (12340 ): 43697 I/Os completed (+4586) 00:10:33.798 QEMU NVMe Ctrl (12341 ): 42695 I/Os completed (+4351) 00:10:33.798 00:10:34.733 QEMU NVMe Ctrl (12340 ): 48258 I/Os completed (+4561) 00:10:34.733 QEMU NVMe Ctrl (12341 ): 46972 I/Os completed (+4277) 00:10:34.733 00:10:34.733 22:55:13 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:34.733 22:55:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:34.733 22:55:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:34.733 22:55:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:34.733 [2024-11-26 22:55:13.738272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:34.733 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:34.733 [2024-11-26 22:55:13.739107] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.733 [2024-11-26 22:55:13.739139] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.733 [2024-11-26 22:55:13.739152] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.733 [2024-11-26 22:55:13.739168] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.733 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:34.733 [2024-11-26 22:55:13.740217] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.733 [2024-11-26 22:55:13.740245] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.733 [2024-11-26 22:55:13.740256] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.733 [2024-11-26 22:55:13.740266] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.733 22:55:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:34.733 22:55:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:34.733 [2024-11-26 22:55:13.756263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:34.733 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:34.733 [2024-11-26 22:55:13.756980] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.733 [2024-11-26 22:55:13.757012] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.733 [2024-11-26 22:55:13.757024] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.733 [2024-11-26 22:55:13.757037] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.733 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:34.733 [2024-11-26 22:55:13.757840] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.733 [2024-11-26 22:55:13.757872] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.733 [2024-11-26 22:55:13.757883] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.733 [2024-11-26 22:55:13.757895] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.733 22:55:13 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:34.733 22:55:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:34.733 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:34.733 EAL: Scan for (pci) bus failed. 00:10:34.733 22:55:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:34.733 22:55:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:34.733 22:55:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:34.992 22:55:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:34.992 22:55:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:34.992 22:55:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:34.992 22:55:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:34.992 22:55:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:34.992 Attaching to 0000:00:10.0 00:10:34.992 Attached to 0000:00:10.0 00:10:34.992 22:55:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:34.992 22:55:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:34.992 22:55:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:34.992 Attaching to 0000:00:11.0 00:10:34.992 Attached to 0000:00:11.0 00:10:35.558 QEMU NVMe Ctrl (12340 ): 3207 I/Os completed (+3207) 00:10:35.559 QEMU NVMe Ctrl (12341 ): 2654 I/Os completed (+2654) 00:10:35.559 00:10:36.929 QEMU NVMe Ctrl (12340 ): 6898 I/Os completed (+3691) 00:10:36.929 QEMU NVMe Ctrl (12341 ): 6380 I/Os completed (+3726) 00:10:36.929 00:10:37.862 QEMU NVMe Ctrl (12340 ): 10898 I/Os completed (+4000) 00:10:37.862 QEMU NVMe Ctrl (12341 ): 10419 I/Os completed (+4039) 00:10:37.862 00:10:38.804 QEMU NVMe Ctrl (12340 ): 15256 I/Os completed (+4358) 00:10:38.804 QEMU NVMe Ctrl (12341 ): 14726 I/Os completed (+4307) 00:10:38.804 00:10:39.740 QEMU NVMe Ctrl (12340 ): 19564 I/Os completed (+4308) 00:10:39.740 QEMU NVMe Ctrl (12341 ): 18954 I/Os completed (+4228) 00:10:39.740 00:10:40.680 QEMU NVMe Ctrl (12340 ): 24148 I/Os completed (+4584) 00:10:40.680 QEMU NVMe Ctrl (12341 ): 23234 I/Os completed (+4280) 00:10:40.680 00:10:41.613 QEMU NVMe Ctrl (12340 ): 27858 I/Os completed (+3710) 00:10:41.613 QEMU NVMe Ctrl (12341 ): 26973 I/Os completed (+3739) 00:10:41.613 
00:10:42.547 QEMU NVMe Ctrl (12340 ): 32159 I/Os completed (+4301) 00:10:42.547 QEMU NVMe Ctrl (12341 ): 31129 I/Os completed (+4156) 00:10:42.547 00:10:43.921 QEMU NVMe Ctrl (12340 ): 36660 I/Os completed (+4501) 00:10:43.921 QEMU NVMe Ctrl (12341 ): 35419 I/Os completed (+4290) 00:10:43.921 00:10:44.855 QEMU NVMe Ctrl (12340 ): 41082 I/Os completed (+4422) 00:10:44.855 QEMU NVMe Ctrl (12341 ): 39702 I/Os completed (+4283) 00:10:44.855 00:10:45.791 QEMU NVMe Ctrl (12340 ): 45703 I/Os completed (+4621) 00:10:45.791 QEMU NVMe Ctrl (12341 ): 44072 I/Os completed (+4370) 00:10:45.791 00:10:46.734 QEMU NVMe Ctrl (12340 ): 50359 I/Os completed (+4656) 00:10:46.734 QEMU NVMe Ctrl (12341 ): 48643 I/Os completed (+4571) 00:10:46.734 00:10:46.994 22:55:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:46.994 22:55:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:46.994 22:55:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:46.994 22:55:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:46.994 [2024-11-26 22:55:25.999696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:46.994 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:46.994 [2024-11-26 22:55:26.001922] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.994 [2024-11-26 22:55:26.001948] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.994 [2024-11-26 22:55:26.001963] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.994 [2024-11-26 22:55:26.001973] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.994 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:46.994 [2024-11-26 22:55:26.002965] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.994 [2024-11-26 22:55:26.003003] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.994 [2024-11-26 22:55:26.003017] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.994 [2024-11-26 22:55:26.003027] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.994 22:55:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:46.994 22:55:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:46.994 [2024-11-26 22:55:26.022945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:46.994 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:46.994 [2024-11-26 22:55:26.023798] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.994 [2024-11-26 22:55:26.023900] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.994 [2024-11-26 22:55:26.023955] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.994 [2024-11-26 22:55:26.023979] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.994 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:46.994 [2024-11-26 22:55:26.024895] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.994 [2024-11-26 22:55:26.024977] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.994 [2024-11-26 22:55:26.025001] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.994 [2024-11-26 22:55:26.025046] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.994 22:55:26 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:46.994 22:55:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:46.994 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:46.994 EAL: Scan for (pci) bus failed. 00:10:47.254 22:55:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:47.254 22:55:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:47.254 22:55:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:47.254 22:55:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:47.254 22:55:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:47.254 22:55:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:47.254 22:55:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:47.254 22:55:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:47.254 Attaching to 0000:00:10.0 00:10:47.254 Attached to 0000:00:10.0 00:10:47.254 22:55:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:47.254 22:55:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:47.254 22:55:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:47.254 Attaching to 0000:00:11.0 00:10:47.254 Attached to 0000:00:11.0 00:10:47.254 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:47.254 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:47.254 [2024-11-26 22:55:26.307538] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:59.490 22:55:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:59.490 22:55:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:59.490 22:55:38 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.85 00:10:59.490 22:55:38 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.85 00:10:59.490 22:55:38 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:59.490 22:55:38 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.85 00:10:59.490 22:55:38 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.85 2 00:10:59.490 remove_attach_helper took 42.85s to complete (handling 2 nvme drive(s)) 22:55:38 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:06.070 22:55:44 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 80054 00:11:06.070 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (80054) - No such process 00:11:06.070 22:55:44 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 80054 00:11:06.070 22:55:44 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:06.070 22:55:44 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:06.070 22:55:44 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:06.070 22:55:44 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=80605 00:11:06.070 22:55:44 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:06.070 22:55:44 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:06.070 22:55:44 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 80605 00:11:06.070 22:55:44 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 80605 ']' 00:11:06.070 22:55:44 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.070 22:55:44 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:06.070 22:55:44 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.070 22:55:44 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:06.070 22:55:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:06.070 [2024-11-26 22:55:44.389123] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:11:06.070 [2024-11-26 22:55:44.389240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80605 ] 00:11:06.070 [2024-11-26 22:55:44.521761] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:06.070 [2024-11-26 22:55:44.553359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.070 [2024-11-26 22:55:44.571371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.330 22:55:45 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:06.330 22:55:45 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:11:06.330 22:55:45 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:06.330 22:55:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.330 22:55:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:06.331 22:55:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.331 22:55:45 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:06.331 22:55:45 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:06.331 22:55:45 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:06.331 22:55:45 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:06.331 22:55:45 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:06.331 22:55:45 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:06.331 22:55:45 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:06.331 22:55:45 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:06.331 22:55:45 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:06.331 22:55:45 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:06.331 22:55:45 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:06.331 22:55:45 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:06.331 22:55:45 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:12.913 22:55:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:12.913 22:55:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:12.913 22:55:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:12.913 22:55:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:12.913 22:55:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:12.913 22:55:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:12.913 22:55:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:12.913 22:55:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:12.913 22:55:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:12.913 22:55:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:12.913 22:55:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:12.913 22:55:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.913 22:55:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:12.913 22:55:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.913 22:55:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:12.913 22:55:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:12.913 [2024-11-26 22:55:51.319845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:12.913 [2024-11-26 22:55:51.320928] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.913 [2024-11-26 22:55:51.320961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.913 [2024-11-26 22:55:51.320975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.913 [2024-11-26 22:55:51.320989] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.913 [2024-11-26 22:55:51.320996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.913 [2024-11-26 22:55:51.321006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.913 [2024-11-26 22:55:51.321013] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.913 [2024-11-26 22:55:51.321020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.913 [2024-11-26 22:55:51.321027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.913 [2024-11-26 22:55:51.321034] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.913 [2024-11-26 22:55:51.321041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.913 [2024-11-26 22:55:51.321048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.913 [2024-11-26 22:55:51.719841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:12.913 [2024-11-26 22:55:51.720916] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.913 [2024-11-26 22:55:51.720946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.913 [2024-11-26 22:55:51.720957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.913 [2024-11-26 22:55:51.720968] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.913 [2024-11-26 22:55:51.720976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.913 [2024-11-26 22:55:51.720983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.913 [2024-11-26 22:55:51.720991] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.913 [2024-11-26 22:55:51.720998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.913 [2024-11-26 22:55:51.721007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.913 [2024-11-26 22:55:51.721014] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.913 [2024-11-26 22:55:51.721021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.913 [2024-11-26 22:55:51.721027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.913 22:55:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:12.913 22:55:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:12.913 22:55:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:12.913 22:55:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:12.913 22:55:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:12.913 22:55:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:12.913 22:55:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.913 22:55:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:12.913 22:55:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.913 22:55:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:12.913 22:55:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:12.913 22:55:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:12.913 22:55:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:12.913 22:55:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:12.913 22:55:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:12.913 22:55:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:12.913 22:55:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:12.913 22:55:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:12.913 22:55:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:13.174 22:55:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:13.174 22:55:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:13.174 22:55:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:25.476 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:25.476 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:25.476 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:25.476 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:25.476 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:25.476 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:25.476 22:56:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.476 22:56:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:25.476 22:56:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.476 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:25.476 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:25.476 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:25.476 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:25.476 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:25.476 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:25.476 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:25.476 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:25.476 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:25.476 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:25.476 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:25.476 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:25.476 22:56:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.476 22:56:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:25.476 22:56:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.476 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:25.476 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:25.476 [2024-11-26 22:56:04.220022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
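For context on the bdev_bdfs calls traced above: the helper asks the running SPDK target for its bdevs over JSON-RPC and reduces the reply to a sorted, de-duplicated list of NVMe PCI addresses. A minimal sketch of the same idea, assuming rpc.py (or an equivalent rpc_cmd wrapper) can reach the target:

  # bdev_bdfs: print the PCI address (BDF) of every NVMe-backed bdev,
  # one per line, sorted and de-duplicated -- mirrors the traced jq filter.
  bdev_bdfs() {
      rpc.py bdev_get_bdevs \
          | jq -r '.[].driver_specific.nvme[].pci_address' \
          | sort -u
  }

With both test controllers attached this prints 0000:00:10.0 and 0000:00:11.0; after a successful surprise removal it prints nothing.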
00:11:25.476 [2024-11-26 22:56:04.221078] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.477 [2024-11-26 22:56:04.221107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.477 [2024-11-26 22:56:04.221118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.477 [2024-11-26 22:56:04.221131] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.477 [2024-11-26 22:56:04.221138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.477 [2024-11-26 22:56:04.221147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.477 [2024-11-26 22:56:04.221153] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.477 [2024-11-26 22:56:04.221161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.477 [2024-11-26 22:56:04.221168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.477 [2024-11-26 22:56:04.221175] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.477 [2024-11-26 22:56:04.221182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.477 [2024-11-26 22:56:04.221193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.736 [2024-11-26 22:56:04.620018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:25.736 [2024-11-26 22:56:04.620998] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.736 [2024-11-26 22:56:04.621025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.736 [2024-11-26 22:56:04.621035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.736 [2024-11-26 22:56:04.621044] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.736 [2024-11-26 22:56:04.621053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.736 [2024-11-26 22:56:04.621059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.736 [2024-11-26 22:56:04.621068] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.736 [2024-11-26 22:56:04.621075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.736 [2024-11-26 22:56:04.621082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.736 [2024-11-26 22:56:04.621088] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.736 [2024-11-26 22:56:04.621096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.736 [2024-11-26 22:56:04.621102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.736 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:25.736 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:25.736 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:25.736 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:25.736 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:25.736 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:25.736 22:56:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.736 22:56:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:25.736 22:56:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.736 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:25.736 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:25.736 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:25.736 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:25.736 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:25.997 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:25.997 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:25.997 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:25.997 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:25.997 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:25.997 22:56:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:25.997 22:56:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:25.997 22:56:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:38.241 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:38.241 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:38.241 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:38.241 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:38.241 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:38.241 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:38.241 22:56:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.241 22:56:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:38.241 22:56:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.241 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:38.241 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:38.241 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:38.241 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:38.241 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:38.241 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:38.241 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:38.241 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:38.241 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:38.241 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:38.241 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:38.241 22:56:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.241 22:56:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:38.241 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:38.241 22:56:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.241 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:38.241 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:38.241 [2024-11-26 22:56:17.120213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
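The repeated "Still waiting for %s to be gone" lines come from a polling loop: as long as bdev_bdfs still reports any BDF, the test prints the stragglers, sleeps 0.5s, and polls again. A sketch under that reading (any timeout or error handling in the real script is omitted here):

  # Poll until the detached controllers disappear from bdev_get_bdevs.
  while bdfs=($(bdev_bdfs)) && (( ${#bdfs[@]} > 0 )); do
      printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
      sleep 0.5
  done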
00:11:38.241 [2024-11-26 22:56:17.121264] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.241 [2024-11-26 22:56:17.121309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.242 [2024-11-26 22:56:17.121320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.242 [2024-11-26 22:56:17.121333] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.242 [2024-11-26 22:56:17.121341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.242 [2024-11-26 22:56:17.121349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.242 [2024-11-26 22:56:17.121356] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.242 [2024-11-26 22:56:17.121365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.242 [2024-11-26 22:56:17.121371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.242 [2024-11-26 22:56:17.121379] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.242 [2024-11-26 22:56:17.121388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.242 [2024-11-26 22:56:17.121396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.503 [2024-11-26 22:56:17.520214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:38.503 [2024-11-26 22:56:17.521180] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.503 [2024-11-26 22:56:17.521209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.503 [2024-11-26 22:56:17.521219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.503 [2024-11-26 22:56:17.521228] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.503 [2024-11-26 22:56:17.521238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.503 [2024-11-26 22:56:17.521245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.503 [2024-11-26 22:56:17.521253] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.503 [2024-11-26 22:56:17.521259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.503 [2024-11-26 22:56:17.521268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.503 [2024-11-26 22:56:17.521274] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.503 [2024-11-26 22:56:17.521281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.503 [2024-11-26 22:56:17.521287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.503 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:38.503 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:38.503 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:38.503 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:38.503 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:38.503 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:38.503 22:56:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.503 22:56:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:38.503 22:56:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.763 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:38.763 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:38.763 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:38.763 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:38.763 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:38.763 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:38.763 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:38.763 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:38.763 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:38.763 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:39.024 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:39.024 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:39.024 22:56:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:51.252 22:56:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:51.252 22:56:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:51.252 22:56:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:51.252 22:56:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:51.252 22:56:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:51.252 22:56:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:51.252 22:56:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.252 22:56:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:51.252 22:56:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.252 22:56:29 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:51.252 22:56:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:51.252 22:56:29 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.72 00:11:51.252 22:56:29 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.72 00:11:51.252 22:56:29 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:51.252 22:56:29 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.72 00:11:51.252 22:56:29 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.72 2 00:11:51.252 remove_attach_helper took 44.72s to complete (handling 2 nvme drive(s)) 22:56:29 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:51.252 22:56:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.252 22:56:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:51.252 22:56:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.252 22:56:29 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:51.252 22:56:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.252 22:56:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:51.252 22:56:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.252 22:56:29 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:51.252 22:56:29 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:51.252 22:56:29 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:51.252 22:56:29 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:51.252 22:56:29 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:51.252 22:56:29 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:51.252 22:56:29 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:51.252 22:56:29 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:51.252 22:56:29 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:51.252 22:56:29 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:51.252 22:56:29 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:51.253 22:56:29 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:51.253 22:56:29 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:57.879 22:56:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:57.879 22:56:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:57.879 22:56:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:57.879 22:56:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:57.879 22:56:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:57.879 22:56:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:57.879 22:56:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:57.879 22:56:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:57.879 22:56:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:57.879 22:56:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:57.879 22:56:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:57.879 22:56:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.879 22:56:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:57.879 22:56:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.879 22:56:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:57.879 22:56:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:57.879 [2024-11-26 22:56:36.071213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:57.879 [2024-11-26 22:56:36.071972] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.879 [2024-11-26 22:56:36.072008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.879 [2024-11-26 22:56:36.072019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.880 [2024-11-26 22:56:36.072033] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.880 [2024-11-26 22:56:36.072040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.880 [2024-11-26 22:56:36.072049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.880 [2024-11-26 22:56:36.072057] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.880 [2024-11-26 22:56:36.072066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.880 [2024-11-26 22:56:36.072073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.880 [2024-11-26 22:56:36.072081] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.880 [2024-11-26 22:56:36.072087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.880 [2024-11-26 22:56:36.072095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.880 [2024-11-26 22:56:36.471216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:57.880 [2024-11-26 22:56:36.471982] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.880 [2024-11-26 22:56:36.472012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.880 [2024-11-26 22:56:36.472024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.880 [2024-11-26 22:56:36.472035] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.880 [2024-11-26 22:56:36.472043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.880 [2024-11-26 22:56:36.472051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.880 [2024-11-26 22:56:36.472058] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.880 [2024-11-26 22:56:36.472065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.880 [2024-11-26 22:56:36.472073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.880 [2024-11-26 22:56:36.472079] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.880 [2024-11-26 22:56:36.472089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.880 [2024-11-26 22:56:36.472096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.880 22:56:36 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:57.880 22:56:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:57.880 22:56:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:57.880 22:56:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:57.880 22:56:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:57.880 22:56:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:57.880 22:56:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.880 22:56:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:57.880 22:56:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.880 22:56:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:57.880 22:56:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:57.880 22:56:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:57.880 22:56:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:57.880 22:56:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:57.880 22:56:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:57.880 22:56:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:57.880 22:56:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:57.880 22:56:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:57.880 22:56:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:57.880 22:56:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:57.880 22:56:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:57.880 22:56:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:10.118 22:56:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:10.118 22:56:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:10.118 22:56:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:10.118 22:56:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:10.118 22:56:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:10.118 22:56:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:10.118 22:56:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.118 22:56:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:10.118 22:56:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.118 22:56:48 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:10.118 22:56:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:10.118 22:56:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:10.118 22:56:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:10.118 22:56:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:10.118 22:56:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:10.118 22:56:48 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:10.118 22:56:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:10.118 22:56:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:10.118 22:56:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:10.118 22:56:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:10.118 22:56:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:10.118 22:56:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.118 22:56:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:10.118 22:56:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.118 22:56:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:10.118 22:56:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:10.118 [2024-11-26 22:56:48.971401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
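Once the drives are gone, the @56-@62 echoes re-attach them. The trace records only the values written (1, then per device uio_pci_generic, the BDF, and an empty string), not the sysfs files they land in; one conventional rescan-and-override sequence that produces writes like these is sketched below, with every target path an assumption:

  # The echoed values match the trace; the sysfs destinations are assumed,
  # and the trace's second BDF echo may target a different file.
  echo 1 > /sys/bus/pci/rescan                               # re-enumerate the bus
  for dev in "${nvmes[@]}"; do
      echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
      echo "$dev" > /sys/bus/pci/drivers_probe               # bind honoring the override
      echo '' > "/sys/bus/pci/devices/$dev/driver_override"  # clear the override again
  done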
00:12:10.118 [2024-11-26 22:56:48.972155] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.118 [2024-11-26 22:56:48.972186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.118 [2024-11-26 22:56:48.972197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.118 [2024-11-26 22:56:48.972210] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.118 [2024-11-26 22:56:48.972218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.118 [2024-11-26 22:56:48.972226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.118 [2024-11-26 22:56:48.972232] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.118 [2024-11-26 22:56:48.972241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.118 [2024-11-26 22:56:48.972248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.118 [2024-11-26 22:56:48.972255] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.118 [2024-11-26 22:56:48.972261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.118 [2024-11-26 22:56:48.972268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.378 22:56:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:10.378 22:56:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:10.378 22:56:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:10.378 22:56:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:10.378 22:56:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.378 22:56:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:10.378 22:56:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:10.378 22:56:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:10.378 22:56:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.378 22:56:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:10.378 22:56:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:10.378 [2024-11-26 22:56:49.471401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:10.378 [2024-11-26 22:56:49.472121] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.378 [2024-11-26 22:56:49.472145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.378 [2024-11-26 22:56:49.472155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.378 [2024-11-26 22:56:49.472164] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.378 [2024-11-26 22:56:49.472173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.378 [2024-11-26 22:56:49.472180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.378 [2024-11-26 22:56:49.472187] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.378 [2024-11-26 22:56:49.472194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.378 [2024-11-26 22:56:49.472201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.378 [2024-11-26 22:56:49.472207] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.378 [2024-11-26 22:56:49.472215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.378 [2024-11-26 22:56:49.472222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.949 22:56:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:10.949 22:56:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:10.949 22:56:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:10.949 22:56:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:10.949 22:56:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:10.949 22:56:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:10.949 22:56:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.949 22:56:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:10.949 22:56:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.949 22:56:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:10.949 22:56:50 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:11.209 22:56:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:11.209 22:56:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:11.209 22:56:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:11.209 22:56:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:11.209 22:56:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:11.209 22:56:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:11.209 22:56:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:11.209 22:56:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:11.209 22:56:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:11.209 22:56:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:11.209 22:56:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:23.463 22:57:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:23.463 22:57:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:23.463 22:57:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:23.463 22:57:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:23.463 22:57:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:23.463 22:57:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:23.463 22:57:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.463 22:57:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:23.463 22:57:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.463 22:57:02 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:23.463 22:57:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:23.463 22:57:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:23.463 22:57:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:23.463 22:57:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:23.463 22:57:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:23.463 22:57:02 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:23.463 22:57:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:23.463 22:57:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:23.463 22:57:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:23.463 22:57:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:23.463 22:57:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:23.463 22:57:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.463 22:57:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:23.463 22:57:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.463 22:57:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:23.463 22:57:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:23.463 [2024-11-26 22:57:02.371601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
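Putting the traced pieces together, each of the three hotplug_events iterations follows the same shape: surprise-remove both controllers, wait for their bdevs to vanish, rebind them, sleep 12s, then assert that bdev_bdfs again reports exactly both BDFs. A condensed sketch (the sysfs remove path is an assumption; the trace shows only "echo 1"):

  while (( hotplug_events-- )); do
      for dev in "${nvmes[@]}"; do
          echo 1 > "/sys/bus/pci/devices/$dev/remove"   # surprise removal, path assumed
      done
      # ...poll until bdev_bdfs is empty, then rebind as sketched above...
      sleep 12
      bdfs=($(bdev_bdfs))
      [[ ${bdfs[*]} == "${nvmes[*]}" ]]                 # expect 0000:00:10.0 0000:00:11.0
  done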
00:12:23.463 [2024-11-26 22:57:02.372374] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:23.463 [2024-11-26 22:57:02.372400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.463 [2024-11-26 22:57:02.372411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.463 [2024-11-26 22:57:02.372427] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:23.463 [2024-11-26 22:57:02.372434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.463 [2024-11-26 22:57:02.372442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.463 [2024-11-26 22:57:02.372448] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:23.463 [2024-11-26 22:57:02.372456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.464 [2024-11-26 22:57:02.372463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.464 [2024-11-26 22:57:02.372470] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:23.464 [2024-11-26 22:57:02.372476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.464 [2024-11-26 22:57:02.372484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.724 [2024-11-26 22:57:02.771601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:23.724 [2024-11-26 22:57:02.772334] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:23.724 [2024-11-26 22:57:02.772360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.724 [2024-11-26 22:57:02.772370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.724 [2024-11-26 22:57:02.772380] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:23.724 [2024-11-26 22:57:02.772388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.724 [2024-11-26 22:57:02.772394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.724 [2024-11-26 22:57:02.772404] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:23.724 [2024-11-26 22:57:02.772410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.724 [2024-11-26 22:57:02.772418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.724 [2024-11-26 22:57:02.772424] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:23.724 [2024-11-26 22:57:02.772431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.724 [2024-11-26 22:57:02.772439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.984 22:57:02 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:23.984 22:57:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:23.984 22:57:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:23.984 22:57:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:23.984 22:57:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:23.984 22:57:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:23.984 22:57:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.984 22:57:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:23.984 22:57:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.984 22:57:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:23.984 22:57:02 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:23.984 22:57:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:23.984 22:57:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:23.984 22:57:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:23.984 22:57:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:23.984 22:57:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:23.984 22:57:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:23.984 22:57:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:23.984 22:57:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:24.245 22:57:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:24.245 22:57:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:24.245 22:57:03 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:36.481 22:57:15 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:36.481 22:57:15 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:36.481 22:57:15 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:36.481 22:57:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:36.481 22:57:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:36.481 22:57:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:36.481 22:57:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.481 22:57:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:36.481 22:57:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.481 22:57:15 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:36.481 22:57:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:36.481 22:57:15 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.21 00:12:36.481 22:57:15 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.21 00:12:36.481 22:57:15 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:36.481 22:57:15 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.21 00:12:36.481 22:57:15 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.21 2 00:12:36.481 remove_attach_helper took 45.21s to complete (handling 2 nvme drive(s)) 22:57:15 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:36.481 22:57:15 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 80605 00:12:36.481 22:57:15 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 80605 ']' 00:12:36.481 22:57:15 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 80605 00:12:36.481 22:57:15 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:36.481 22:57:15 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:36.481 22:57:15 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80605 00:12:36.481 22:57:15 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:36.481 22:57:15 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:36.481 killing process with pid 80605 00:12:36.481 22:57:15 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80605' 00:12:36.481 22:57:15 sw_hotplug -- common/autotest_common.sh@973 -- # kill 80605 00:12:36.481 22:57:15 sw_hotplug -- common/autotest_common.sh@978 -- # wait 80605 00:12:36.481 22:57:15 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:36.741 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:37.311 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:37.311 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:37.311 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:37.311 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:37.573 00:12:37.573 real 2m28.581s 00:12:37.573 user 1m48.912s 00:12:37.573 sys 0m18.262s 00:12:37.573 
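The 44.72s and 45.21s figures above are produced by a timing wrapper: the helper runs under bash's time keyword with TIMEFORMAT=%2R, so only the elapsed real seconds are emitted, and that figure feeds the final printf. A rough sketch of the mechanism (the output plumbing is simplified; the real wrapper juggles file descriptors so the helper's own output still reaches the log):

  timing_cmd() {
      local TIMEFORMAT=%2R helper_time
      # `time` reports on stderr; capture just the %2R elapsed-seconds figure.
      helper_time=$( { time "$@" >/dev/null; } 2>&1 )
      printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
          "$helper_time" 2
  }
  timing_cmd remove_attach_helper 3 6 true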
************************************ 00:12:37.573 END TEST sw_hotplug 00:12:37.573 22:57:16 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.573 22:57:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:37.573 ************************************ 00:12:37.573 22:57:16 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:37.573 22:57:16 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:37.573 22:57:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:37.573 22:57:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.573 22:57:16 -- common/autotest_common.sh@10 -- # set +x 00:12:37.573 ************************************ 00:12:37.573 START TEST nvme_xnvme 00:12:37.573 ************************************ 00:12:37.573 22:57:16 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:37.573 * Looking for test storage... 00:12:37.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:37.573 22:57:16 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:37.573 22:57:16 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:37.573 22:57:16 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:37.839 22:57:16 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:37.839 22:57:16 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:37.839 22:57:16 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:37.839 22:57:16 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:37.839 22:57:16 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:37.839 22:57:16 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:37.839 22:57:16 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:37.839 22:57:16 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:37.839 22:57:16 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:37.839 22:57:16 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:37.839 22:57:16 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:37.839 22:57:16 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:37.839 22:57:16 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:37.839 22:57:16 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:37.839 22:57:16 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:37.839 22:57:16 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:37.839 22:57:16 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:37.839 22:57:16 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:37.839 22:57:16 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:37.839 22:57:16 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:37.839 22:57:16 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:37.839 22:57:16 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:37.839 22:57:16 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:37.839 22:57:16 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:37.839 22:57:16 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:37.839 22:57:16 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:37.839 22:57:16 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:37.839 22:57:16 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:37.839 22:57:16 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:37.839 22:57:16 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:37.839 22:57:16 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:37.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.839 --rc genhtml_branch_coverage=1 00:12:37.839 --rc genhtml_function_coverage=1 00:12:37.839 --rc genhtml_legend=1 00:12:37.839 --rc geninfo_all_blocks=1 00:12:37.839 --rc geninfo_unexecuted_blocks=1 00:12:37.839 00:12:37.839 ' 00:12:37.839 22:57:16 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:37.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.839 --rc genhtml_branch_coverage=1 00:12:37.839 --rc genhtml_function_coverage=1 00:12:37.839 --rc genhtml_legend=1 00:12:37.839 --rc geninfo_all_blocks=1 00:12:37.839 --rc geninfo_unexecuted_blocks=1 00:12:37.839 00:12:37.839 ' 00:12:37.839 22:57:16 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:37.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.839 --rc genhtml_branch_coverage=1 00:12:37.839 --rc genhtml_function_coverage=1 00:12:37.839 --rc genhtml_legend=1 00:12:37.839 --rc geninfo_all_blocks=1 00:12:37.839 --rc geninfo_unexecuted_blocks=1 00:12:37.839 00:12:37.839 ' 00:12:37.839 22:57:16 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:37.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.839 --rc genhtml_branch_coverage=1 00:12:37.839 --rc genhtml_function_coverage=1 00:12:37.839 --rc genhtml_legend=1 00:12:37.839 --rc geninfo_all_blocks=1 00:12:37.839 --rc geninfo_unexecuted_blocks=1 00:12:37.839 00:12:37.839 ' 00:12:37.839 22:57:16 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:12:37.839 22:57:16 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:37.839 22:57:16 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:37.839 22:57:16 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:12:37.839 22:57:16 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:37.839 22:57:16 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:37.839 22:57:16 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:37.839 22:57:16 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:37.839 22:57:16 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:37.839 22:57:16 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:37.839 22:57:16 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:37.839 22:57:16 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:37.839 22:57:16 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:37.839 22:57:16 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:37.839 22:57:16 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:37.839 22:57:16 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:37.839 22:57:16 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:37.839 22:57:16 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:37.839 22:57:16 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:37.839 22:57:16 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:37.839 22:57:16 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:37.839 22:57:16 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:37.839 22:57:16 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:37.839 22:57:16 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:37.839 22:57:16 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:37.839 22:57:16 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
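The scripts/common.sh trace a few records above (cmp_versions 1.15 '<' 2, checking the lcov version) is a plain component-wise comparison: both strings are split on ., - and :, and the first differing pair of numeric components decides. Reconstructed as a standalone sketch (input validation from the original is omitted, and treating missing components as 0 is an assumption):

  lt() {   # lt A B: succeed when version A < version B
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < n; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # equal versions are not less-than
  }
  lt 1.15 2 && echo 'lcov predates 2.x'   # matches the traced check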
00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@78 -- 
# CONFIG_FIO_PLUGIN=y 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:37.840 22:57:16 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:37.840 22:57:16 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:37.840 22:57:16 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:37.840 22:57:16 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:37.840 22:57:16 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:37.840 22:57:16 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:37.840 22:57:16 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:37.840 22:57:16 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:37.840 22:57:16 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:12:37.840 22:57:16 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:37.840 22:57:16 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:37.840 22:57:16 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:37.840 22:57:16 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:37.840 22:57:16 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:37.840 22:57:16 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:37.840 22:57:16 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:37.840 22:57:16 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:37.840 #define SPDK_CONFIG_H 00:12:37.840 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:37.840 #define SPDK_CONFIG_APPS 1 00:12:37.840 #define SPDK_CONFIG_ARCH native 00:12:37.840 #define SPDK_CONFIG_ASAN 1 00:12:37.840 #undef SPDK_CONFIG_AVAHI 00:12:37.840 #undef SPDK_CONFIG_CET 00:12:37.840 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:37.840 #define SPDK_CONFIG_COVERAGE 1 00:12:37.840 #define SPDK_CONFIG_CROSS_PREFIX 00:12:37.840 #undef SPDK_CONFIG_CRYPTO 00:12:37.840 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:37.840 #undef SPDK_CONFIG_CUSTOMOCF 00:12:37.840 #undef SPDK_CONFIG_DAOS 00:12:37.840 #define SPDK_CONFIG_DAOS_DIR 00:12:37.840 #define SPDK_CONFIG_DEBUG 1 00:12:37.840 #undef 
SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:37.840 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:12:37.840 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:12:37.840 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:12:37.840 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:37.840 #undef SPDK_CONFIG_DPDK_UADK 00:12:37.840 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:37.840 #define SPDK_CONFIG_EXAMPLES 1 00:12:37.840 #undef SPDK_CONFIG_FC 00:12:37.840 #define SPDK_CONFIG_FC_PATH 00:12:37.840 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:37.840 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:37.840 #define SPDK_CONFIG_FSDEV 1 00:12:37.840 #undef SPDK_CONFIG_FUSE 00:12:37.840 #undef SPDK_CONFIG_FUZZER 00:12:37.840 #define SPDK_CONFIG_FUZZER_LIB 00:12:37.840 #undef SPDK_CONFIG_GOLANG 00:12:37.840 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:37.840 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:37.840 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:37.840 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:37.840 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:37.840 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:37.840 #undef SPDK_CONFIG_HAVE_LZ4 00:12:37.840 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:37.840 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:37.840 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:37.840 #define SPDK_CONFIG_IDXD 1 00:12:37.840 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:37.840 #undef SPDK_CONFIG_IPSEC_MB 00:12:37.840 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:37.840 #define SPDK_CONFIG_ISAL 1 00:12:37.840 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:37.840 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:37.840 #define SPDK_CONFIG_LIBDIR 00:12:37.840 #undef SPDK_CONFIG_LTO 00:12:37.840 #define SPDK_CONFIG_MAX_LCORES 128 00:12:37.840 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:37.840 #define SPDK_CONFIG_NVME_CUSE 1 00:12:37.840 #undef SPDK_CONFIG_OCF 00:12:37.840 #define SPDK_CONFIG_OCF_PATH 00:12:37.840 #define SPDK_CONFIG_OPENSSL_PATH 00:12:37.840 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:37.840 #define SPDK_CONFIG_PGO_DIR 00:12:37.840 #undef SPDK_CONFIG_PGO_USE 00:12:37.840 #define SPDK_CONFIG_PREFIX /usr/local 00:12:37.840 #undef SPDK_CONFIG_RAID5F 00:12:37.840 #undef SPDK_CONFIG_RBD 00:12:37.840 #define SPDK_CONFIG_RDMA 1 00:12:37.840 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:37.840 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:37.841 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:37.841 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:37.841 #define SPDK_CONFIG_SHARED 1 00:12:37.841 #undef SPDK_CONFIG_SMA 00:12:37.841 #define SPDK_CONFIG_TESTS 1 00:12:37.841 #undef SPDK_CONFIG_TSAN 00:12:37.841 #define SPDK_CONFIG_UBLK 1 00:12:37.841 #define SPDK_CONFIG_UBSAN 1 00:12:37.841 #undef SPDK_CONFIG_UNIT_TESTS 00:12:37.841 #undef SPDK_CONFIG_URING 00:12:37.841 #define SPDK_CONFIG_URING_PATH 00:12:37.841 #undef SPDK_CONFIG_URING_ZNS 00:12:37.841 #undef SPDK_CONFIG_USDT 00:12:37.841 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:37.841 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:37.841 #undef SPDK_CONFIG_VFIO_USER 00:12:37.841 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:37.841 #define SPDK_CONFIG_VHOST 1 00:12:37.841 #define SPDK_CONFIG_VIRTIO 1 00:12:37.841 #undef SPDK_CONFIG_VTUNE 00:12:37.841 #define SPDK_CONFIG_VTUNE_DIR 00:12:37.841 #define SPDK_CONFIG_WERROR 1 00:12:37.841 #define SPDK_CONFIG_WPDK_DIR 00:12:37.841 #define SPDK_CONFIG_XNVME 1 00:12:37.841 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ 
\S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:37.841 22:57:16 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:37.841 22:57:16 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:37.841 22:57:16 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.841 22:57:16 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.841 22:57:16 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.841 22:57:16 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.841 22:57:16 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.841 22:57:16 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.841 22:57:16 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:37.841 22:57:16 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:37.841 22:57:16 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:37.841 22:57:16 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:37.841 22:57:16 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:37.841 22:57:16 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:37.841 22:57:16 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:37.841 22:57:16 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:12:37.841 22:57:16 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:37.841 22:57:16 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:37.841 22:57:16 nvme_xnvme -- pm/common@68 -- 
# uname -s 00:12:37.841 22:57:16 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:12:37.841 22:57:16 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:37.841 22:57:16 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:37.841 22:57:16 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:37.841 22:57:16 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:37.841 22:57:16 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:37.841 22:57:16 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:37.841 22:57:16 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:12:37.841 22:57:16 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:37.841 22:57:16 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:37.841 22:57:16 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:37.841 22:57:16 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:37.841 22:57:16 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:37.841 22:57:16 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:12:37.841 22:57:16 nvme_xnvme -- 
common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@126 -- # : /home/vagrant/spdk_repo/dpdk/build 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:37.841 22:57:16 nvme_xnvme -- 
common/autotest_common.sh@128 -- # : 0 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:37.841 22:57:16 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@140 -- # : main 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 
00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 
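For reference, the sanitizer environment assembled in the trace above reduces to a few colon-separated key=value strings that any instrumented binary launched from this shell inherits. A minimal recap (values copied from the trace; the suppression file is the one seeded with "leak:libfuse3.so"):

export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
echo 'leak:libfuse3.so' > /var/tmp/asan_suppression_file    # known-benign leak, ignored by LSan
export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
./build/bin/spdk_tgt    # with abort_on_error=1, the first ASan/UBSan report aborts the run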
00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:37.842 22:57:16 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 81952 ]] 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 81952 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.rU5s0v 00:12:37.843 22:57:16 
nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.rU5s0v/tests/xnvme /tmp/spdk.rU5s0v 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13246341120 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6339207168 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261964800 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=3428352 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13246341120 00:12:37.843 22:57:16 nvme_xnvme -- 
common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6339207168 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265217024 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=176128 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=97349152768 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=2353627136 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:37.843 * Looking for test storage... 
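Written out as a standalone snippet, the storage probe running through this stretch of the trace tabulates `df -T` output by mount point, resolves the test directory to its backing mount, and compares available bytes against the requested scratch size. A sketch only: the exact df invocation is not visible in the trace, so `-B1` (byte-sized blocks) is an assumption.

declare -A fss avails
while read -r src fs size used avail _ mnt; do    # df -T: device, fstype, size, used, avail, use%, mountpoint
    fss[$mnt]=$fs
    avails[$mnt]=$avail
done < <(df -T -B1 | tail -n +2)

requested=$((2 * 1024 * 1024 * 1024 + 64 * 1024 * 1024))    # 2214592512 bytes, matching requested_size above
testdir=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme
mnt=$(df "$testdir" | awk '$1 !~ /Filesystem/{print $6}')   # resolve the dir to its mount point
(( avails[$mnt] >= requested )) && printf '* Found test storage at %s\n' "$testdir"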
00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13246341120 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:37.843 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:37.843 22:57:16 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:37.843 22:57:16 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:37.843 22:57:16 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:37.843 22:57:16 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:37.843 22:57:16 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:37.843 22:57:16 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:37.843 22:57:16 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:37.843 22:57:16 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:37.843 22:57:16 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:37.843 22:57:16 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:37.843 22:57:16 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:37.843 22:57:16 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:37.844 22:57:16 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:37.844 22:57:16 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:37.844 22:57:16 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:37.844 22:57:16 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:37.844 22:57:16 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:37.844 22:57:16 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:37.844 22:57:16 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:37.844 22:57:16 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:37.844 22:57:16 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:37.844 22:57:16 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:37.844 22:57:16 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:37.844 22:57:16 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:37.844 22:57:16 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:37.844 22:57:16 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:37.844 22:57:16 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:37.844 22:57:16 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:37.844 22:57:16 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:37.844 22:57:16 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:37.844 22:57:16 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:37.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.844 --rc genhtml_branch_coverage=1 00:12:37.844 --rc genhtml_function_coverage=1 00:12:37.844 --rc genhtml_legend=1 00:12:37.844 --rc geninfo_all_blocks=1 00:12:37.844 --rc geninfo_unexecuted_blocks=1 00:12:37.844 00:12:37.844 ' 00:12:37.844 22:57:16 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:37.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.844 --rc genhtml_branch_coverage=1 00:12:37.844 --rc genhtml_function_coverage=1 00:12:37.844 --rc genhtml_legend=1 00:12:37.844 --rc geninfo_all_blocks=1 
00:12:37.844 --rc geninfo_unexecuted_blocks=1 00:12:37.844 00:12:37.844 ' 00:12:37.844 22:57:16 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:37.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.844 --rc genhtml_branch_coverage=1 00:12:37.844 --rc genhtml_function_coverage=1 00:12:37.844 --rc genhtml_legend=1 00:12:37.844 --rc geninfo_all_blocks=1 00:12:37.844 --rc geninfo_unexecuted_blocks=1 00:12:37.844 00:12:37.844 ' 00:12:37.844 22:57:16 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:37.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.844 --rc genhtml_branch_coverage=1 00:12:37.844 --rc genhtml_function_coverage=1 00:12:37.844 --rc genhtml_legend=1 00:12:37.844 --rc geninfo_all_blocks=1 00:12:37.844 --rc geninfo_unexecuted_blocks=1 00:12:37.844 00:12:37.844 ' 00:12:37.844 22:57:16 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:37.844 22:57:16 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:37.844 22:57:16 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.844 22:57:16 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.844 22:57:16 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.844 22:57:16 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.844 22:57:16 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.844 22:57:16 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.844 22:57:16 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:37.844 22:57:16 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.844 22:57:16 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:12:37.844 22:57:16 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:12:37.844 22:57:16 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:12:37.844 22:57:16 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:12:37.844 22:57:16 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:12:37.844 22:57:16 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:12:37.844 22:57:16 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:12:37.844 22:57:16 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:12:37.844 22:57:16 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:12:37.844 22:57:16 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:12:37.844 22:57:16 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:12:37.844 22:57:16 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:12:37.844 22:57:16 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:12:37.844 22:57:16 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:12:37.844 22:57:16 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:12:37.844 22:57:16 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:12:37.844 22:57:16 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:12:37.844 22:57:16 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:12:37.844 22:57:16 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:12:37.844 22:57:16 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:12:37.844 22:57:16 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:12:37.844 22:57:16 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:38.106 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:38.367 Waiting for block devices as requested 00:12:38.367 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:38.628 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:38.628 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:38.628 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:43.921 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:43.921 22:57:22 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:12:44.183 22:57:23 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:12:44.183 22:57:23 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:12:44.444 22:57:23 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:12:44.444 22:57:23 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:12:44.444 22:57:23 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:12:44.444 22:57:23 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:12:44.444 22:57:23 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:44.444 No valid GPT data, bailing 00:12:44.444 22:57:23 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:44.444 22:57:23 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:12:44.444 22:57:23 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:44.444 22:57:23 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:44.444 22:57:23 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:44.444 22:57:23 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:44.444 22:57:23 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:44.444 22:57:23 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:44.444 22:57:23 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:44.444 22:57:23 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:44.444 22:57:23 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:44.444 22:57:23 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:44.444 22:57:23 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:44.444 22:57:23 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:44.444 22:57:23 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:44.444 22:57:23 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:44.444 22:57:23 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:44.444 22:57:23 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:44.444 22:57:23 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:44.444 22:57:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:44.444 ************************************ 00:12:44.444 START TEST xnvme_rpc 00:12:44.444 ************************************ 00:12:44.444 22:57:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:44.444 22:57:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:44.445 22:57:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:44.445 22:57:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:44.445 22:57:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:44.445 22:57:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82344 00:12:44.445 22:57:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82344 00:12:44.445 22:57:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82344 ']' 00:12:44.445 22:57:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.445 22:57:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:44.445 22:57:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:44.445 22:57:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.445 22:57:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:44.445 22:57:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.706 [2024-11-26 22:57:23.604710] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
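The device check that just ran ("No valid GPT data, bailing", then an empty blkid PTTYPE) is what clears /dev/nvme0n1 for use: a namespace is treated as in use only if a partition table is found on it. A simplified sketch of that gate (the traced helper consults scripts/spdk-gpt.py before falling back to blkid, and its real glob is /dev/nvme*n!(*p*); both are simplified here):

declare -A xnvme_filename
block_in_use() {
    local block=$1 pt
    pt=$(blkid -s PTTYPE -o value "$block") || true    # empty output: no partition table
    [[ -n $pt ]]                                       # in use only if a table was found
}
for nvme in /dev/nvme*n1; do
    block_in_use "$nvme" || { xnvme_filename[libaio]=$nvme; break; }   # claim the first blank namespace
done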
00:12:44.706 [2024-11-26 22:57:23.604846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82344 ] 00:12:44.706 [2024-11-26 22:57:23.742589] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:44.706 [2024-11-26 22:57:23.773211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.706 [2024-11-26 22:57:23.802050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.652 xnvme_bdev 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme 
conserve_cpu 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82344 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82344 ']' 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82344 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82344 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:45.652 killing process with pid 82344 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82344' 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82344 00:12:45.652 22:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82344 00:12:45.913 00:12:45.913 real 0m1.432s 00:12:45.913 user 0m1.467s 00:12:45.913 sys 0m0.435s 00:12:45.913 22:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.913 22:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.913 ************************************ 00:12:45.913 END TEST xnvme_rpc 00:12:45.913 ************************************ 00:12:45.913 22:57:25 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:45.913 22:57:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:45.913 22:57:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.913 22:57:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:45.913 ************************************ 00:12:45.913 START TEST xnvme_bdevperf 00:12:45.913 ************************************ 00:12:45.913 22:57:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:45.913 22:57:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:45.913 22:57:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:45.913 22:57:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:45.913 22:57:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:45.913 22:57:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:45.913 22:57:25 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:45.913 22:57:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:46.176 { 00:12:46.176 "subsystems": [ 00:12:46.176 { 00:12:46.176 "subsystem": "bdev", 00:12:46.176 "config": [ 00:12:46.176 { 00:12:46.176 "params": { 00:12:46.176 "io_mechanism": "libaio", 00:12:46.176 "conserve_cpu": false, 00:12:46.176 "filename": "/dev/nvme0n1", 00:12:46.176 "name": "xnvme_bdev" 00:12:46.176 }, 00:12:46.176 "method": "bdev_xnvme_create" 00:12:46.176 }, 00:12:46.176 { 00:12:46.176 "method": "bdev_wait_for_examine" 00:12:46.176 } 00:12:46.176 ] 00:12:46.176 } 00:12:46.176 ] 00:12:46.176 } 00:12:46.176 [2024-11-26 22:57:25.096813] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:12:46.176 [2024-11-26 22:57:25.096943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82396 ] 00:12:46.176 [2024-11-26 22:57:25.233416] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:46.176 [2024-11-26 22:57:25.264738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.176 [2024-11-26 22:57:25.293410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.438 Running I/O for 5 seconds... 00:12:48.332 30492.00 IOPS, 119.11 MiB/s [2024-11-26T22:57:28.845Z] 29616.00 IOPS, 115.69 MiB/s [2024-11-26T22:57:29.788Z] 29457.33 IOPS, 115.07 MiB/s [2024-11-26T22:57:30.730Z] 29663.75 IOPS, 115.87 MiB/s 00:12:51.603 Latency(us) 00:12:51.603 [2024-11-26T22:57:30.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.603 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:51.603 xnvme_bdev : 5.00 30669.02 119.80 0.00 0.00 2082.40 393.85 10082.46 00:12:51.603 [2024-11-26T22:57:30.730Z] =================================================================================================================== 00:12:51.603 [2024-11-26T22:57:30.730Z] Total : 30669.02 119.80 0.00 0.00 2082.40 393.85 10082.46 00:12:51.603 22:57:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:51.603 22:57:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:51.603 22:57:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:51.603 22:57:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:51.603 22:57:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:51.603 { 00:12:51.603 "subsystems": [ 00:12:51.603 { 00:12:51.603 "subsystem": "bdev", 00:12:51.603 "config": [ 00:12:51.603 { 00:12:51.603 "params": { 00:12:51.603 "io_mechanism": "libaio", 00:12:51.603 "conserve_cpu": false, 00:12:51.603 "filename": "/dev/nvme0n1", 00:12:51.603 "name": "xnvme_bdev" 00:12:51.603 }, 00:12:51.603 "method": "bdev_xnvme_create" 00:12:51.603 }, 00:12:51.603 { 00:12:51.603 "method": "bdev_wait_for_examine" 
00:12:51.603 } 00:12:51.603 ] 00:12:51.603 } 00:12:51.603 ] 00:12:51.604 } 00:12:51.604 [2024-11-26 22:57:30.696204] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:12:51.604 [2024-11-26 22:57:30.696354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82465 ] 00:12:51.865 [2024-11-26 22:57:30.832859] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:51.865 [2024-11-26 22:57:30.855763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.865 [2024-11-26 22:57:30.884571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.128 Running I/O for 5 seconds... 00:12:54.016 34108.00 IOPS, 133.23 MiB/s [2024-11-26T22:57:34.088Z] 33201.50 IOPS, 129.69 MiB/s [2024-11-26T22:57:35.032Z] 33158.67 IOPS, 129.53 MiB/s [2024-11-26T22:57:36.420Z] 29840.50 IOPS, 116.56 MiB/s [2024-11-26T22:57:36.420Z] 24488.80 IOPS, 95.66 MiB/s 00:12:57.293 Latency(us) 00:12:57.293 [2024-11-26T22:57:36.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:57.293 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:57.293 xnvme_bdev : 5.02 24406.84 95.34 0.00 0.00 2613.38 83.50 42144.69 00:12:57.293 [2024-11-26T22:57:36.420Z] =================================================================================================================== 00:12:57.293 [2024-11-26T22:57:36.420Z] Total : 24406.84 95.34 0.00 0.00 2613.38 83.50 42144.69 00:12:57.293 00:12:57.293 real 0m11.191s 00:12:57.293 user 0m4.007s 00:12:57.293 sys 0m5.583s 00:12:57.293 22:57:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.293 ************************************ 00:12:57.293 END TEST xnvme_bdevperf 00:12:57.293 ************************************ 00:12:57.293 22:57:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:57.293 22:57:36 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:57.293 22:57:36 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:57.293 22:57:36 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.293 22:57:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.293 ************************************ 00:12:57.293 START TEST xnvme_fio_plugin 00:12:57.293 ************************************ 00:12:57.293 22:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:57.293 22:57:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:57.293 22:57:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:57.293 22:57:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:57.294 22:57:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:57.294 22:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:57.294 22:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:57.294 22:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:57.294 22:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:57.294 22:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:57.294 22:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:57.294 22:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:57.294 22:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:57.294 22:57:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:57.294 22:57:36 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:57.294 22:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:57.294 22:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:57.294 22:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:57.294 22:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:57.294 22:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:57.294 22:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:57.294 22:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:57.294 22:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:57.294 22:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:57.294 { 00:12:57.294 "subsystems": [ 00:12:57.294 { 00:12:57.294 "subsystem": "bdev", 00:12:57.294 "config": [ 00:12:57.294 { 00:12:57.294 "params": { 00:12:57.294 "io_mechanism": "libaio", 00:12:57.294 "conserve_cpu": false, 00:12:57.294 "filename": "/dev/nvme0n1", 00:12:57.294 "name": "xnvme_bdev" 00:12:57.294 }, 00:12:57.294 "method": "bdev_xnvme_create" 00:12:57.294 }, 00:12:57.294 { 00:12:57.294 "method": "bdev_wait_for_examine" 00:12:57.294 } 00:12:57.294 ] 00:12:57.294 } 00:12:57.294 ] 00:12:57.294 } 00:12:57.555 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:57.555 fio-3.35 00:12:57.555 Starting 1 thread 00:13:02.900 00:13:02.900 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82569: Tue Nov 26 22:57:41 2024 00:13:02.900 read: IOPS=34.0k, BW=133MiB/s (139MB/s)(664MiB/5002msec) 00:13:02.900 slat (usec): min=4, max=1685, avg=22.06, stdev=87.35 00:13:02.900 clat (usec): min=24, max=10979, avg=1332.99, stdev=627.85 00:13:02.900 lat (usec): min=84, max=10985, avg=1355.05, stdev=622.57 00:13:02.900 clat percentiles (usec): 00:13:02.900 | 
1.00th=[ 265], 5.00th=[ 478], 10.00th=[ 635], 20.00th=[ 848],
00:13:02.900 | 30.00th=[ 1012], 40.00th=[ 1139], 50.00th=[ 1270], 60.00th=[ 1401],
00:13:02.900 | 70.00th=[ 1549], 80.00th=[ 1745], 90.00th=[ 2024], 95.00th=[ 2376],
00:13:02.900 | 99.00th=[ 3326], 99.50th=[ 3818], 99.90th=[ 5997], 99.95th=[ 6849],
00:13:02.900 | 99.99th=[ 8717]
00:13:02.900 bw ( KiB/s): min=127200, max=149800, per=100.00%, avg=136458.22, stdev=8155.83, samples=9
00:13:02.900 iops : min=31800, max=37450, avg=34114.56, stdev=2038.96, samples=9
00:13:02.900 lat (usec) : 50=0.01%, 100=0.01%, 250=0.81%, 500=4.69%, 750=9.42%
00:13:02.900 lat (usec) : 1000=14.48%
00:13:02.900 lat (msec) : 2=59.97%, 4=10.21%, 10=0.40%, 20=0.01%
00:13:02.900 cpu : usr=36.43%, sys=52.37%, ctx=10, majf=0, minf=773
00:13:02.900 IO depths : 1=0.2%, 2=0.6%, 4=1.9%, 8=6.6%, 16=22.8%, 32=65.6%, >=64=2.3%
00:13:02.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:02.900 complete : 0=0.0%, 4=97.8%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0%
00:13:02.900 issued rwts: total=169989,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:02.900 latency : target=0, window=0, percentile=100.00%, depth=64
00:13:02.900
00:13:02.900 Run status group 0 (all jobs):
00:13:02.900 READ: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=664MiB (696MB), run=5002-5002msec
00:13:03.161 -----------------------------------------------------
00:13:03.161 Suppressions used:
00:13:03.161 count bytes template
00:13:03.161 1 11 /usr/src/fio/parse.c
00:13:03.161 1 8 libtcmalloc_minimal.so
00:13:03.161 1 904 libcrypto.so
00:13:03.161 -----------------------------------------------------
00:13:03.161 00
00:13:03.422 22:57:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:13:03.422 22:57:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:13:03.422 22:57:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:03.422 22:57:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:03.422 22:57:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:03.422 22:57:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:03.422 22:57:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:03.422 22:57:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:03.422 22:57:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:03.422 22:57:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:03.422 { 00:13:03.422 "subsystems": [ 00:13:03.422 { 00:13:03.422 "subsystem": "bdev", 00:13:03.422 "config": [ 00:13:03.422 { 00:13:03.422 "params": { 00:13:03.422 "io_mechanism": "libaio", 00:13:03.422 "conserve_cpu": false, 00:13:03.422 "filename": "/dev/nvme0n1", 00:13:03.422 "name": "xnvme_bdev" 00:13:03.422 }, 00:13:03.422 "method": "bdev_xnvme_create" 00:13:03.422 }, 00:13:03.422 { 00:13:03.422 "method": "bdev_wait_for_examine" 00:13:03.422 } 00:13:03.422 ] 00:13:03.422 } 00:13:03.422 ] 00:13:03.422 } 00:13:03.422 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:03.422 fio-3.35 00:13:03.422 Starting 1 thread 00:13:10.017 00:13:10.017 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82655: Tue Nov 26 22:57:47 2024 00:13:10.017 write: IOPS=24.5k, BW=95.8MiB/s (100MB/s)(480MiB/5007msec); 0 zone resets 00:13:10.017 slat (usec): min=4, max=1688, avg=18.70, stdev=70.38 00:13:10.017 clat (usec): min=11, max=27493, avg=2301.05, stdev=3088.73 00:13:10.017 lat (usec): min=61, max=27498, avg=2319.75, stdev=3086.83 00:13:10.017 clat percentiles (usec): 00:13:10.017 | 1.00th=[ 133], 5.00th=[ 281], 10.00th=[ 396], 20.00th=[ 594], 00:13:10.017 | 30.00th=[ 725], 40.00th=[ 832], 50.00th=[ 963], 60.00th=[ 1156], 00:13:10.017 | 70.00th=[ 1450], 80.00th=[ 2606], 90.00th=[ 8094], 95.00th=[ 9765], 00:13:10.017 | 99.00th=[11731], 99.50th=[12387], 99.90th=[19530], 99.95th=[22938], 00:13:10.017 | 99.99th=[26084] 00:13:10.017 bw ( KiB/s): min=83000, max=113093, per=100.00%, avg=98158.90, stdev=12154.13, samples=10 00:13:10.017 iops : min=20750, max=28273, avg=24539.70, stdev=3038.50, samples=10 00:13:10.017 lat (usec) : 20=0.02%, 50=0.13%, 100=0.36%, 250=3.38%, 500=11.19% 00:13:10.017 lat (usec) : 750=17.48%, 1000=19.90% 00:13:10.017 lat (msec) : 2=25.51%, 4=4.01%, 10=13.71%, 20=4.22%, 50=0.10% 00:13:10.017 cpu : usr=57.37%, sys=29.46%, ctx=13, majf=0, minf=773 00:13:10.017 IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=1.6%, 16=8.2%, 32=83.1%, >=64=6.6% 00:13:10.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.017 complete : 0=0.0%, 4=95.7%, 8=0.9%, 16=1.3%, 32=1.4%, 64=0.8%, >=64=0.0% 00:13:10.017 issued rwts: total=0,122775,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.017 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:10.017 00:13:10.017 Run status group 0 (all jobs): 00:13:10.017 WRITE: bw=95.8MiB/s 
(100MB/s), 95.8MiB/s-95.8MiB/s (100MB/s-100MB/s), io=480MiB (503MB), run=5007-5007msec 00:13:10.017 ----------------------------------------------------- 00:13:10.017 Suppressions used: 00:13:10.017 count bytes template 00:13:10.017 1 11 /usr/src/fio/parse.c 00:13:10.018 1 8 libtcmalloc_minimal.so 00:13:10.018 1 904 libcrypto.so 00:13:10.018 ----------------------------------------------------- 00:13:10.018 00:13:10.018 00:13:10.018 real 0m12.114s 00:13:10.018 user 0m5.842s 00:13:10.018 sys 0m4.671s 00:13:10.018 22:57:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:10.018 22:57:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:10.018 ************************************ 00:13:10.018 END TEST xnvme_fio_plugin 00:13:10.018 ************************************ 00:13:10.018 22:57:48 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:10.018 22:57:48 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:10.018 22:57:48 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:10.018 22:57:48 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:10.018 22:57:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:10.018 22:57:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.018 22:57:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:10.018 ************************************ 00:13:10.018 START TEST xnvme_rpc 00:13:10.018 ************************************ 00:13:10.018 22:57:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:10.018 22:57:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:10.018 22:57:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:10.018 22:57:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:10.018 22:57:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:10.018 22:57:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82736 00:13:10.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.018 22:57:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82736 00:13:10.018 22:57:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82736 ']' 00:13:10.018 22:57:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.018 22:57:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:10.018 22:57:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.018 22:57:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:10.018 22:57:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.018 22:57:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:10.018 [2024-11-26 22:57:48.553131] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
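Everything from here down repeats the libaio pass with conserve_cpu=true. The cc table declared at xnvme.sh@48 through @50 maps the boolean onto the RPC's -c flag, so the only functional change is one extra argument to bdev_xnvme_create. Driven by hand with the stock rpc.py client rather than the rpc_cmd wrapper in the trace, the check looks roughly like this (an illustration of the traced calls, not the test script itself):

    # Create the xnvme bdev with CPU conservation enabled (-c), as the
    # trace does for the conserve_cpu=true pass.
    scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c

    # Verify the parameter round-tripped: the test pulls the framework
    # config and expects the literal string "true".
    scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'

    # Tear down before the target is killed.
    scripts/rpc.py bdev_xnvme_delete xnvme_bdev
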
00:13:10.018 [2024-11-26 22:57:48.553289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82736 ] 00:13:10.018 [2024-11-26 22:57:48.690989] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:10.018 [2024-11-26 22:57:48.719918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.018 [2024-11-26 22:57:48.749021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.289 22:57:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:10.289 22:57:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:10.289 22:57:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:13:10.289 22:57:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.289 22:57:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.560 xnvme_bdev 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme 
conserve_cpu 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82736 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82736 ']' 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82736 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82736 00:13:10.560 killing process with pid 82736 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82736' 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82736 00:13:10.560 22:57:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82736 00:13:10.823 00:13:10.823 real 0m1.435s 00:13:10.823 user 0m1.491s 00:13:10.823 sys 0m0.419s 00:13:10.823 ************************************ 00:13:10.823 END TEST xnvme_rpc 00:13:10.823 ************************************ 00:13:10.823 22:57:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:10.823 22:57:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.823 22:57:49 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:10.823 22:57:49 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:10.823 22:57:49 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.823 22:57:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:11.085 ************************************ 00:13:11.085 START TEST xnvme_bdevperf 00:13:11.085 ************************************ 00:13:11.085 22:57:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:11.085 22:57:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:11.085 22:57:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:13:11.085 22:57:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:11.085 22:57:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:11.085 22:57:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:11.085 22:57:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:11.085 22:57:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:11.085 { 00:13:11.085 "subsystems": [ 00:13:11.085 { 00:13:11.085 "subsystem": "bdev", 00:13:11.085 "config": [ 00:13:11.085 { 00:13:11.085 "params": { 00:13:11.085 "io_mechanism": "libaio", 00:13:11.085 "conserve_cpu": true, 00:13:11.085 "filename": "/dev/nvme0n1", 00:13:11.085 "name": "xnvme_bdev" 00:13:11.085 }, 00:13:11.085 "method": "bdev_xnvme_create" 00:13:11.085 }, 00:13:11.085 { 00:13:11.085 "method": "bdev_wait_for_examine" 00:13:11.085 } 00:13:11.085 ] 00:13:11.085 } 00:13:11.085 ] 00:13:11.085 } 00:13:11.085 [2024-11-26 22:57:50.033089] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:13:11.085 [2024-11-26 22:57:50.033218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82788 ] 00:13:11.085 [2024-11-26 22:57:50.169066] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:11.085 [2024-11-26 22:57:50.195673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.347 [2024-11-26 22:57:50.219521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.347 Running I/O for 5 seconds... 00:13:13.235 30892.00 IOPS, 120.67 MiB/s [2024-11-26T22:57:53.750Z] 33405.00 IOPS, 130.49 MiB/s [2024-11-26T22:57:54.694Z] 32904.00 IOPS, 128.53 MiB/s [2024-11-26T22:57:55.638Z] 32651.50 IOPS, 127.54 MiB/s [2024-11-26T22:57:55.638Z] 32940.00 IOPS, 128.67 MiB/s 00:13:16.511 Latency(us) 00:13:16.511 [2024-11-26T22:57:55.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:16.511 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:16.511 xnvme_bdev : 5.01 32896.71 128.50 0.00 0.00 1941.17 305.62 11594.83 00:13:16.511 [2024-11-26T22:57:55.638Z] =================================================================================================================== 00:13:16.511 [2024-11-26T22:57:55.638Z] Total : 32896.71 128.50 0.00 0.00 1941.17 305.62 11594.83 00:13:16.511 22:57:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:16.511 22:57:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:16.511 22:57:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:16.511 22:57:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:16.511 22:57:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:16.511 { 00:13:16.511 "subsystems": [ 00:13:16.511 { 00:13:16.511 "subsystem": "bdev", 00:13:16.511 "config": [ 00:13:16.511 { 00:13:16.511 "params": { 00:13:16.511 "io_mechanism": "libaio", 00:13:16.511 "conserve_cpu": true, 00:13:16.511 "filename": "/dev/nvme0n1", 00:13:16.511 "name": "xnvme_bdev" 00:13:16.511 }, 00:13:16.511 "method": "bdev_xnvme_create" 00:13:16.511 }, 
00:13:16.511 { 00:13:16.511 "method": "bdev_wait_for_examine" 00:13:16.511 } 00:13:16.511 ] 00:13:16.511 } 00:13:16.511 ] 00:13:16.511 } 00:13:16.511 [2024-11-26 22:57:55.616047] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:13:16.511 [2024-11-26 22:57:55.616162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82858 ] 00:13:16.770 [2024-11-26 22:57:55.747753] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:16.770 [2024-11-26 22:57:55.777935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.770 [2024-11-26 22:57:55.796777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.770 Running I/O for 5 seconds... 00:13:19.096 39132.00 IOPS, 152.86 MiB/s [2024-11-26T22:57:59.205Z] 36834.50 IOPS, 143.88 MiB/s [2024-11-26T22:58:00.149Z] 27946.67 IOPS, 109.17 MiB/s [2024-11-26T22:58:01.093Z] 22187.50 IOPS, 86.67 MiB/s [2024-11-26T22:58:01.093Z] 18708.80 IOPS, 73.08 MiB/s 00:13:21.966 Latency(us) 00:13:21.966 [2024-11-26T22:58:01.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:21.966 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:21.966 xnvme_bdev : 5.01 18668.39 72.92 0.00 0.00 3418.83 55.93 31053.98 00:13:21.966 [2024-11-26T22:58:01.093Z] =================================================================================================================== 00:13:21.966 [2024-11-26T22:58:01.093Z] Total : 18668.39 72.92 0.00 0.00 3418.83 55.93 31053.98 00:13:22.228 00:13:22.228 real 0m11.155s 00:13:22.228 user 0m5.083s 00:13:22.228 sys 0m4.596s 00:13:22.228 ************************************ 00:13:22.228 22:58:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.228 22:58:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:22.228 END TEST xnvme_bdevperf 00:13:22.228 ************************************ 00:13:22.228 22:58:01 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:22.228 22:58:01 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:22.228 22:58:01 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.228 22:58:01 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:22.228 ************************************ 00:13:22.228 START TEST xnvme_fio_plugin 00:13:22.228 ************************************ 00:13:22.228 22:58:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:22.228 22:58:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:22.228 22:58:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:22.228 22:58:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:22.228 22:58:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:22.228 22:58:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:22.228 22:58:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:22.228 22:58:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:22.228 22:58:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:22.228 22:58:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:22.228 22:58:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:22.228 22:58:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:22.228 22:58:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:22.228 22:58:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:22.228 22:58:01 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:22.228 22:58:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:22.228 22:58:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:22.228 22:58:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:22.228 22:58:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:22.228 22:58:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:22.228 22:58:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:22.228 22:58:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:22.228 22:58:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:22.228 22:58:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:22.228 { 00:13:22.228 "subsystems": [ 00:13:22.228 { 00:13:22.228 "subsystem": "bdev", 00:13:22.228 "config": [ 00:13:22.228 { 00:13:22.228 "params": { 00:13:22.228 "io_mechanism": "libaio", 00:13:22.228 "conserve_cpu": true, 00:13:22.228 "filename": "/dev/nvme0n1", 00:13:22.228 "name": "xnvme_bdev" 00:13:22.228 }, 00:13:22.228 "method": "bdev_xnvme_create" 00:13:22.228 }, 00:13:22.228 { 00:13:22.228 "method": "bdev_wait_for_examine" 00:13:22.228 } 00:13:22.228 ] 00:13:22.228 } 00:13:22.228 ] 00:13:22.228 } 00:13:22.490 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:22.490 fio-3.35 00:13:22.490 Starting 1 thread 00:13:27.788 00:13:27.788 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82966: Tue Nov 26 22:58:06 2024 00:13:27.788 read: IOPS=33.8k, BW=132MiB/s (138MB/s)(660MiB/5002msec) 00:13:27.788 slat (usec): min=4, max=3370, avg=22.09, stdev=94.37 00:13:27.788 clat (usec): min=105, max=5046, avg=1303.36, stdev=520.35 00:13:27.788 lat (usec): min=187, max=5212, avg=1325.45, 
stdev=512.47
00:13:27.788 clat percentiles (usec):
00:13:27.788 | 1.00th=[ 285], 5.00th=[ 529], 10.00th=[ 676], 20.00th=[ 889],
00:13:27.788 | 30.00th=[ 1029], 40.00th=[ 1156], 50.00th=[ 1270], 60.00th=[ 1385],
00:13:27.788 | 70.00th=[ 1516], 80.00th=[ 1663], 90.00th=[ 1909], 95.00th=[ 2180],
00:13:27.788 | 99.00th=[ 2966], 99.50th=[ 3294], 99.90th=[ 3884], 99.95th=[ 4015],
00:13:27.788 | 99.99th=[ 4686]
00:13:27.788 bw ( KiB/s): min=124400, max=148336, per=100.00%, avg=136181.56, stdev=7235.12, samples=9
00:13:27.788 iops : min=31100, max=37084, avg=34045.33, stdev=1808.75, samples=9
00:13:27.788 lat (usec) : 250=0.67%, 500=3.62%, 750=8.41%, 1000=14.85%
00:13:27.788 lat (msec) : 2=64.79%, 4=7.61%, 10=0.05%
00:13:27.788 cpu : usr=38.63%, sys=52.47%, ctx=9, majf=0, minf=773
00:13:27.788 IO depths : 1=0.5%, 2=1.1%, 4=3.0%, 8=8.5%, 16=23.3%, 32=61.6%, >=64=2.1%
00:13:27.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:27.788 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0%
00:13:27.788 issued rwts: total=168948,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:27.788 latency : target=0, window=0, percentile=100.00%, depth=64
00:13:27.788 00
00:13:27.788 Run status group 0 (all jobs):
00:13:27.788 READ: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=660MiB (692MB), run=5002-5002msec
00:13:28.362 -----------------------------------------------------
00:13:28.362 Suppressions used:
00:13:28.362 count bytes template
00:13:28.362 1 11 /usr/src/fio/parse.c
00:13:28.362 1 8 libtcmalloc_minimal.so
00:13:28.362 1 904 libcrypto.so
00:13:28.362 -----------------------------------------------------
00:13:28.362 00
00:13:28.362 22:58:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:13:28.362 22:58:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:13:28.362 22:58:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:13:28.362 22:58:07 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:13:28.362 22:58:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:13:28.362 22:58:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:13:28.362 22:58:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:13:28.362 22:58:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:13:28.362 22:58:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:13:28.362 22:58:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:13:28.362 22:58:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:13:28.362 22:58:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:13:28.362 22:58:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:13:28.362 22:58:07 nvme_xnvme.xnvme_fio_plugin --
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:28.362 22:58:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:28.362 22:58:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:28.362 22:58:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:28.362 22:58:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:28.362 22:58:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:28.362 22:58:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:28.362 22:58:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:28.362 { 00:13:28.362 "subsystems": [ 00:13:28.362 { 00:13:28.362 "subsystem": "bdev", 00:13:28.362 "config": [ 00:13:28.362 { 00:13:28.362 "params": { 00:13:28.362 "io_mechanism": "libaio", 00:13:28.362 "conserve_cpu": true, 00:13:28.362 "filename": "/dev/nvme0n1", 00:13:28.362 "name": "xnvme_bdev" 00:13:28.362 }, 00:13:28.362 "method": "bdev_xnvme_create" 00:13:28.362 }, 00:13:28.362 { 00:13:28.362 "method": "bdev_wait_for_examine" 00:13:28.362 } 00:13:28.362 ] 00:13:28.362 } 00:13:28.362 ] 00:13:28.362 } 00:13:28.362 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:28.362 fio-3.35 00:13:28.362 Starting 1 thread 00:13:34.948 00:13:34.948 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83052: Tue Nov 26 22:58:12 2024 00:13:34.948 write: IOPS=31.1k, BW=122MiB/s (128MB/s)(609MiB/5008msec); 0 zone resets 00:13:34.948 slat (usec): min=4, max=1846, avg=20.69, stdev=80.63 00:13:34.948 clat (usec): min=8, max=22213, avg=1533.39, stdev=1753.75 00:13:34.948 lat (usec): min=66, max=22217, avg=1554.08, stdev=1750.34 00:13:34.948 clat percentiles (usec): 00:13:34.948 | 1.00th=[ 208], 5.00th=[ 371], 10.00th=[ 515], 20.00th=[ 725], 00:13:34.948 | 30.00th=[ 881], 40.00th=[ 1029], 50.00th=[ 1172], 60.00th=[ 1319], 00:13:34.948 | 70.00th=[ 1500], 80.00th=[ 1696], 90.00th=[ 2089], 95.00th=[ 3195], 00:13:34.948 | 99.00th=[10552], 99.50th=[11338], 99.90th=[12911], 99.95th=[13698], 00:13:34.948 | 99.99th=[20055] 00:13:34.948 bw ( KiB/s): min=91712, max=143992, per=100.00%, avg=124638.30, stdev=20163.99, samples=10 00:13:34.948 iops : min=22928, max=35998, avg=31159.50, stdev=5040.99, samples=10 00:13:34.948 lat (usec) : 10=0.01%, 20=0.01%, 50=0.03%, 100=0.09%, 250=1.54% 00:13:34.948 lat (usec) : 500=7.90%, 750=12.12%, 1000=16.39% 00:13:34.948 lat (msec) : 2=50.62%, 4=6.77%, 10=3.16%, 20=1.37%, 50=0.01% 00:13:34.948 cpu : usr=46.10%, sys=43.24%, ctx=11, majf=0, minf=773 00:13:34.948 IO depths : 1=0.3%, 2=0.8%, 4=2.5%, 8=7.4%, 16=20.4%, 32=65.2%, >=64=3.4% 00:13:34.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.948 complete : 0=0.0%, 4=97.4%, 8=0.3%, 16=0.4%, 32=0.5%, 64=1.4%, >=64=0.0% 00:13:34.948 issued rwts: total=0,155898,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.948 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:34.948 00:13:34.948 Run status group 0 (all jobs): 00:13:34.948 WRITE: bw=122MiB/s 
(128MB/s), 122MiB/s-122MiB/s (128MB/s-128MB/s), io=609MiB (639MB), run=5008-5008msec 00:13:34.948 ----------------------------------------------------- 00:13:34.948 Suppressions used: 00:13:34.948 count bytes template 00:13:34.948 1 11 /usr/src/fio/parse.c 00:13:34.948 1 8 libtcmalloc_minimal.so 00:13:34.948 1 904 libcrypto.so 00:13:34.948 ----------------------------------------------------- 00:13:34.948 00:13:34.948 00:13:34.948 real 0m12.141s 00:13:34.948 user 0m5.399s 00:13:34.948 sys 0m5.394s 00:13:34.948 22:58:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:34.948 ************************************ 00:13:34.948 END TEST xnvme_fio_plugin 00:13:34.948 ************************************ 00:13:34.948 22:58:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:34.948 22:58:13 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:34.948 22:58:13 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:34.948 22:58:13 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:34.948 22:58:13 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:34.948 22:58:13 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:34.948 22:58:13 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:34.948 22:58:13 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:34.948 22:58:13 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:34.948 22:58:13 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:34.948 22:58:13 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:34.948 22:58:13 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:34.948 22:58:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:34.948 ************************************ 00:13:34.948 START TEST xnvme_rpc 00:13:34.948 ************************************ 00:13:34.949 22:58:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:34.949 22:58:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:34.949 22:58:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:34.949 22:58:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:34.949 22:58:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:34.949 22:58:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=83127 00:13:34.949 22:58:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 83127 00:13:34.949 22:58:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 83127 ']' 00:13:34.949 22:58:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.949 22:58:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:34.949 22:58:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:34.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.949 22:58:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
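The target starting here marks the switch from libaio to io_uring. The driver loop in xnvme.sh walks every io_mechanism, points the bdev at the device node recorded for it during prep_nvme, and reruns the same three tests for each conserve_cpu value. Pieced together from the xnvme.sh line numbers visible in the xtrace (@75 through @88), the loop is approximately the following; treat it as a reconstruction, not the verbatim script:

    for io in "${xnvme_io[@]}"; do                   # libaio, io_uring, io_uring_cmd
        method_bdev_xnvme_create_0["io_mechanism"]=$io
        method_bdev_xnvme_create_0["filename"]=${xnvme_filename[$io]}
        filename=${xnvme_filename[$io]}
        name=xnvme_bdev
        for cc in "${xnvme_conserve_cpu[@]}"; do     # false, then true
            method_bdev_xnvme_create_0["conserve_cpu"]=$cc
            conserve_cpu=$cc
            run_test xnvme_rpc xnvme_rpc
            run_test xnvme_bdevperf xnvme_bdevperf
            run_test xnvme_fio_plugin xnvme_fio_plugin
        done
    done
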
00:13:34.949 22:58:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:34.949 22:58:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.949 [2024-11-26 22:58:13.507053] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:13:34.949 [2024-11-26 22:58:13.507230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83127 ] 00:13:34.949 [2024-11-26 22:58:13.649344] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:34.949 [2024-11-26 22:58:13.678652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.949 [2024-11-26 22:58:13.707450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.210 22:58:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:35.210 22:58:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:35.210 22:58:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:13:35.210 22:58:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.210 22:58:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.474 xnvme_bdev 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == 
"bdev_xnvme_create").params.io_mechanism' 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.474 22:58:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 83127 00:13:35.475 22:58:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 83127 ']' 00:13:35.475 22:58:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 83127 00:13:35.475 22:58:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:35.475 22:58:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:35.475 22:58:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83127 00:13:35.475 killing process with pid 83127 00:13:35.475 22:58:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:35.475 22:58:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:35.475 22:58:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83127' 00:13:35.475 22:58:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 83127 00:13:35.475 22:58:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 83127 00:13:35.736 00:13:35.736 real 0m1.401s 00:13:35.736 user 0m1.444s 00:13:35.736 sys 0m0.428s 00:13:35.736 22:58:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:35.736 22:58:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.736 ************************************ 00:13:35.736 END TEST xnvme_rpc 00:13:35.736 ************************************ 00:13:35.736 22:58:14 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:35.736 22:58:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:35.736 22:58:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:35.736 22:58:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:35.998 ************************************ 00:13:35.998 START TEST xnvme_bdevperf 00:13:35.998 ************************************ 00:13:35.998 22:58:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:35.998 22:58:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local 
io_pattern 00:13:35.998 22:58:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:35.998 22:58:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:35.998 22:58:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:35.998 22:58:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:35.998 22:58:14 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:35.998 22:58:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:35.998 { 00:13:35.998 "subsystems": [ 00:13:35.998 { 00:13:35.998 "subsystem": "bdev", 00:13:35.998 "config": [ 00:13:35.998 { 00:13:35.998 "params": { 00:13:35.998 "io_mechanism": "io_uring", 00:13:35.998 "conserve_cpu": false, 00:13:35.998 "filename": "/dev/nvme0n1", 00:13:35.998 "name": "xnvme_bdev" 00:13:35.998 }, 00:13:35.998 "method": "bdev_xnvme_create" 00:13:35.998 }, 00:13:35.998 { 00:13:35.998 "method": "bdev_wait_for_examine" 00:13:35.998 } 00:13:35.998 ] 00:13:35.998 } 00:13:35.998 ] 00:13:35.998 } 00:13:35.998 [2024-11-26 22:58:14.957652] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:13:35.998 [2024-11-26 22:58:14.957806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83185 ] 00:13:35.998 [2024-11-26 22:58:15.098017] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:36.260 [2024-11-26 22:58:15.126928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.260 [2024-11-26 22:58:15.154547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.260 Running I/O for 5 seconds... 
00:13:38.145 34454.00 IOPS, 134.59 MiB/s [2024-11-26T22:58:18.660Z] 34684.00 IOPS, 135.48 MiB/s [2024-11-26T22:58:19.628Z] 34662.00 IOPS, 135.40 MiB/s [2024-11-26T22:58:20.571Z] 34523.25 IOPS, 134.86 MiB/s [2024-11-26T22:58:20.571Z] 34571.40 IOPS, 135.04 MiB/s 00:13:41.444 Latency(us) 00:13:41.444 [2024-11-26T22:58:20.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.444 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:41.444 xnvme_bdev : 5.00 34551.30 134.97 0.00 0.00 1847.81 87.04 21273.99 00:13:41.444 [2024-11-26T22:58:20.571Z] =================================================================================================================== 00:13:41.444 [2024-11-26T22:58:20.571Z] Total : 34551.30 134.97 0.00 0.00 1847.81 87.04 21273.99 00:13:41.444 22:58:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:41.444 22:58:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:41.444 22:58:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:41.444 22:58:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:41.444 22:58:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:41.444 { 00:13:41.444 "subsystems": [ 00:13:41.444 { 00:13:41.444 "subsystem": "bdev", 00:13:41.444 "config": [ 00:13:41.444 { 00:13:41.444 "params": { 00:13:41.444 "io_mechanism": "io_uring", 00:13:41.444 "conserve_cpu": false, 00:13:41.444 "filename": "/dev/nvme0n1", 00:13:41.444 "name": "xnvme_bdev" 00:13:41.444 }, 00:13:41.444 "method": "bdev_xnvme_create" 00:13:41.444 }, 00:13:41.444 { 00:13:41.444 "method": "bdev_wait_for_examine" 00:13:41.444 } 00:13:41.444 ] 00:13:41.444 } 00:13:41.444 ] 00:13:41.444 } 00:13:41.444 [2024-11-26 22:58:20.527243] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:13:41.444 [2024-11-26 22:58:20.527427] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83249 ] 00:13:41.705 [2024-11-26 22:58:20.667014] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:41.705 [2024-11-26 22:58:20.694469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.705 [2024-11-26 22:58:20.724312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.705 Running I/O for 5 seconds... 
00:13:44.039 6826.00 IOPS, 26.66 MiB/s [2024-11-26T22:58:24.112Z] 6786.00 IOPS, 26.51 MiB/s [2024-11-26T22:58:25.058Z] 6893.67 IOPS, 26.93 MiB/s [2024-11-26T22:58:25.998Z] 6889.50 IOPS, 26.91 MiB/s [2024-11-26T22:58:25.998Z] 7363.60 IOPS, 28.76 MiB/s 00:13:46.871 Latency(us) 00:13:46.871 [2024-11-26T22:58:25.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:46.871 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:46.871 xnvme_bdev : 5.01 7366.75 28.78 0.00 0.00 8676.24 75.22 31255.63 00:13:46.871 [2024-11-26T22:58:25.998Z] =================================================================================================================== 00:13:46.871 [2024-11-26T22:58:25.998Z] Total : 7366.75 28.78 0.00 0.00 8676.24 75.22 31255.63 00:13:46.871 00:13:46.871 real 0m11.100s 00:13:46.871 user 0m4.014s 00:13:46.871 sys 0m6.844s 00:13:46.871 ************************************ 00:13:46.871 END TEST xnvme_bdevperf 00:13:46.871 ************************************ 00:13:46.871 22:58:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:46.872 22:58:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:47.130 22:58:26 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:47.130 22:58:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:47.130 22:58:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:47.130 22:58:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:47.130 ************************************ 00:13:47.130 START TEST xnvme_fio_plugin 00:13:47.130 ************************************ 00:13:47.130 22:58:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:47.130 22:58:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:47.130 22:58:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:47.130 22:58:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:47.130 22:58:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:47.130 22:58:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:47.130 22:58:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:47.130 22:58:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:47.130 22:58:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:47.130 22:58:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:47.130 22:58:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:47.130 22:58:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:47.130 22:58:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 
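The fio_plugin invocation being assembled above passes every option on the command line and feeds the bdev config through /dev/fd/62. The same run can be expressed as a plain fio job file; this is a sketch only, with xnvme_bdev.fio and bdev.json as hypothetical file names (bdev.json would hold the "subsystems" block echoed below), and note the harness additionally preloads libasan.so.8 for sanitized builds.

cat > xnvme_bdev.fio <<'EOF'
[xnvme_bdev]
ioengine=spdk_bdev
spdk_json_conf=./bdev.json
filename=xnvme_bdev
direct=1
bs=4k
iodepth=64
numjobs=1
rw=randread
time_based=1
runtime=5
thread=1
EOF

# The external ioengine comes from the preloaded SPDK fio plugin.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /usr/src/fio/fio xnvme_bdev.fio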
00:13:47.130 22:58:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:47.130 22:58:26 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:47.130 22:58:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:47.130 22:58:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:47.130 22:58:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:47.130 22:58:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:47.130 22:58:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:47.130 22:58:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:47.130 22:58:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:47.130 22:58:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:47.130 22:58:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:47.130 { 00:13:47.130 "subsystems": [ 00:13:47.130 { 00:13:47.130 "subsystem": "bdev", 00:13:47.130 "config": [ 00:13:47.130 { 00:13:47.130 "params": { 00:13:47.130 "io_mechanism": "io_uring", 00:13:47.130 "conserve_cpu": false, 00:13:47.130 "filename": "/dev/nvme0n1", 00:13:47.130 "name": "xnvme_bdev" 00:13:47.130 }, 00:13:47.130 "method": "bdev_xnvme_create" 00:13:47.130 }, 00:13:47.130 { 00:13:47.130 "method": "bdev_wait_for_examine" 00:13:47.130 } 00:13:47.130 ] 00:13:47.130 } 00:13:47.130 ] 00:13:47.130 } 00:13:47.130 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:47.130 fio-3.35 00:13:47.130 Starting 1 thread 00:13:53.709 00:13:53.709 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83359: Tue Nov 26 22:58:31 2024 00:13:53.709 read: IOPS=37.8k, BW=148MiB/s (155MB/s)(738MiB/5005msec) 00:13:53.709 slat (nsec): min=2778, max=60432, avg=3650.64, stdev=1822.58 00:13:53.709 clat (usec): min=162, max=9712, avg=1546.85, stdev=339.69 00:13:53.709 lat (usec): min=165, max=9715, avg=1550.50, stdev=339.79 00:13:53.709 clat percentiles (usec): 00:13:53.709 | 1.00th=[ 857], 5.00th=[ 1029], 10.00th=[ 1188], 20.00th=[ 1319], 00:13:53.709 | 30.00th=[ 1401], 40.00th=[ 1450], 50.00th=[ 1516], 60.00th=[ 1582], 00:13:53.709 | 70.00th=[ 1647], 80.00th=[ 1762], 90.00th=[ 1926], 95.00th=[ 2114], 00:13:53.709 | 99.00th=[ 2540], 99.50th=[ 2802], 99.90th=[ 3884], 99.95th=[ 4752], 00:13:53.709 | 99.99th=[ 5997] 00:13:53.709 bw ( KiB/s): min=144384, max=182784, per=100.00%, avg=152032.00, stdev=11759.16, samples=9 00:13:53.709 iops : min=36096, max=45696, avg=38008.00, stdev=2939.79, samples=9 00:13:53.709 lat (usec) : 250=0.01%, 500=0.03%, 750=0.18%, 1000=4.06% 00:13:53.709 lat (msec) : 2=87.89%, 4=7.74%, 10=0.09% 00:13:53.709 cpu : usr=29.96%, sys=68.63%, ctx=13, majf=0, minf=771 00:13:53.709 IO depths : 1=1.4%, 2=2.9%, 4=6.0%, 8=12.2%, 16=24.8%, 32=51.1%, >=64=1.6% 00:13:53.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:53.709 complete : 0=0.0%, 4=98.4%, 8=0.1%, 
16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:53.709 issued rwts: total=189006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:53.709 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:53.709 00:13:53.709 Run status group 0 (all jobs): 00:13:53.709 READ: bw=148MiB/s (155MB/s), 148MiB/s-148MiB/s (155MB/s-155MB/s), io=738MiB (774MB), run=5005-5005msec 00:13:53.709 ----------------------------------------------------- 00:13:53.709 Suppressions used: 00:13:53.709 count bytes template 00:13:53.709 1 11 /usr/src/fio/parse.c 00:13:53.709 1 8 libtcmalloc_minimal.so 00:13:53.709 1 904 libcrypto.so 00:13:53.709 ----------------------------------------------------- 00:13:53.709 00:13:53.709 22:58:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:53.709 22:58:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:53.709 22:58:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:53.709 22:58:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:53.709 22:58:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:53.709 22:58:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:53.709 22:58:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:53.709 22:58:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:53.709 22:58:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:53.709 22:58:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:53.709 22:58:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:53.709 22:58:32 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:53.709 22:58:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:53.709 22:58:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:53.709 22:58:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:53.709 22:58:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:53.709 22:58:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:53.709 22:58:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:53.709 22:58:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:53.709 22:58:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:53.709 22:58:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 
--bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:53.709 { 00:13:53.709 "subsystems": [ 00:13:53.709 { 00:13:53.709 "subsystem": "bdev", 00:13:53.709 "config": [ 00:13:53.709 { 00:13:53.710 "params": { 00:13:53.710 "io_mechanism": "io_uring", 00:13:53.710 "conserve_cpu": false, 00:13:53.710 "filename": "/dev/nvme0n1", 00:13:53.710 "name": "xnvme_bdev" 00:13:53.710 }, 00:13:53.710 "method": "bdev_xnvme_create" 00:13:53.710 }, 00:13:53.710 { 00:13:53.710 "method": "bdev_wait_for_examine" 00:13:53.710 } 00:13:53.710 ] 00:13:53.710 } 00:13:53.710 ] 00:13:53.710 } 00:13:53.710 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:53.710 fio-3.35 00:13:53.710 Starting 1 thread 00:13:59.091 00:13:59.091 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83440: Tue Nov 26 22:58:37 2024 00:13:59.091 write: IOPS=39.6k, BW=155MiB/s (162MB/s)(776MiB/5009msec); 0 zone resets 00:13:59.091 slat (nsec): min=2146, max=87803, avg=3786.86, stdev=1877.71 00:13:59.091 clat (usec): min=60, max=20179, avg=1486.18, stdev=1924.52 00:13:59.091 lat (usec): min=64, max=20183, avg=1489.97, stdev=1924.65 00:13:59.091 clat percentiles (usec): 00:13:59.091 | 1.00th=[ 318], 5.00th=[ 693], 10.00th=[ 807], 20.00th=[ 906], 00:13:59.091 | 30.00th=[ 979], 40.00th=[ 1029], 50.00th=[ 1074], 60.00th=[ 1106], 00:13:59.091 | 70.00th=[ 1156], 80.00th=[ 1221], 90.00th=[ 1434], 95.00th=[ 5145], 00:13:59.091 | 99.00th=[10683], 99.50th=[15008], 99.90th=[18220], 99.95th=[18744], 00:13:59.091 | 99.99th=[19530] 00:13:59.091 bw ( KiB/s): min=44736, max=226848, per=100.00%, avg=158812.80, stdev=72080.73, samples=10 00:13:59.091 iops : min=11184, max=56712, avg=39703.20, stdev=18020.18, samples=10 00:13:59.091 lat (usec) : 100=0.05%, 250=0.54%, 500=1.67%, 750=4.85%, 1000=27.09% 00:13:59.091 lat (msec) : 2=58.74%, 4=1.20%, 10=4.70%, 20=1.18%, 50=0.01% 00:13:59.091 cpu : usr=32.73%, sys=66.19%, ctx=65, majf=0, minf=771 00:13:59.091 IO depths : 1=1.3%, 2=2.5%, 4=5.1%, 8=10.3%, 16=21.4%, 32=55.7%, >=64=3.6% 00:13:59.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:59.091 complete : 0=0.0%, 4=97.8%, 8=0.3%, 16=0.3%, 32=0.2%, 64=1.3%, >=64=0.0% 00:13:59.091 issued rwts: total=0,198578,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:59.091 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:59.091 00:13:59.091 Run status group 0 (all jobs): 00:13:59.091 WRITE: bw=155MiB/s (162MB/s), 155MiB/s-155MiB/s (162MB/s-162MB/s), io=776MiB (813MB), run=5009-5009msec 00:13:59.091 ----------------------------------------------------- 00:13:59.091 Suppressions used: 00:13:59.091 count bytes template 00:13:59.091 1 11 /usr/src/fio/parse.c 00:13:59.091 1 8 libtcmalloc_minimal.so 00:13:59.091 1 904 libcrypto.so 00:13:59.091 ----------------------------------------------------- 00:13:59.091 00:13:59.091 00:13:59.091 real 0m11.923s 00:13:59.091 user 0m4.258s 00:13:59.091 sys 0m7.228s 00:13:59.091 22:58:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:59.091 22:58:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:59.091 ************************************ 00:13:59.091 END TEST xnvme_fio_plugin 00:13:59.091 ************************************ 00:13:59.091 22:58:38 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:59.091 22:58:38 nvme_xnvme -- xnvme/xnvme.sh@83 -- # 
method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:59.091 22:58:38 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:59.091 22:58:38 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:59.091 22:58:38 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:59.091 22:58:38 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:59.091 22:58:38 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:59.091 ************************************ 00:13:59.091 START TEST xnvme_rpc 00:13:59.091 ************************************ 00:13:59.091 22:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:59.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.091 22:58:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:59.091 22:58:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:59.091 22:58:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:59.091 22:58:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:59.091 22:58:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=83519 00:13:59.091 22:58:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 83519 00:13:59.091 22:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 83519 ']' 00:13:59.091 22:58:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:59.091 22:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.091 22:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:59.091 22:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.091 22:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:59.091 22:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.091 [2024-11-26 22:58:38.099271] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:13:59.091 [2024-11-26 22:58:38.099410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83519 ] 00:13:59.352 [2024-11-26 22:58:38.237037] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:59.352 [2024-11-26 22:58:38.267851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.352 [2024-11-26 22:58:38.297017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.920 22:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:59.920 22:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:59.920 22:58:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:59.920 22:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.920 22:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.920 xnvme_bdev 00:13:59.920 22:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.920 22:58:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:59.921 22:58:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:59.921 22:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.921 22:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.921 22:58:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:59.921 22:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.921 22:58:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:59.921 22:58:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:59.921 22:58:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:59.921 22:58:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:59.921 22:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.921 22:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.921 22:58:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.921 22:58:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:59.921 22:58:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:59.921 22:58:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:59.921 22:58:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:59.921 22:58:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.921 22:58:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.181 22:58:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.181 22:58:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:14:00.181 22:58:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:00.181 22:58:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:00.181 22:58:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.181 22:58:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.181 22:58:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:00.181 22:58:39 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.181 22:58:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:00.181 22:58:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:00.181 22:58:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.181 22:58:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.181 22:58:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.181 22:58:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 83519 00:14:00.181 22:58:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 83519 ']' 00:14:00.181 22:58:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 83519 00:14:00.181 22:58:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:00.181 22:58:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:00.181 22:58:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83519 00:14:00.181 22:58:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:00.181 22:58:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:00.181 22:58:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83519' 00:14:00.181 killing process with pid 83519 00:14:00.181 22:58:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 83519 00:14:00.181 22:58:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 83519 00:14:00.442 ************************************ 00:14:00.442 END TEST xnvme_rpc 00:14:00.442 ************************************ 00:14:00.442 00:14:00.442 real 0m1.397s 00:14:00.442 user 0m1.472s 00:14:00.442 sys 0m0.400s 00:14:00.442 22:58:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.442 22:58:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.442 22:58:39 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:00.442 22:58:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:00.442 22:58:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:00.442 22:58:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:00.442 ************************************ 00:14:00.442 START TEST xnvme_bdevperf 00:14:00.442 ************************************ 00:14:00.442 22:58:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:00.442 22:58:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:00.442 22:58:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:14:00.442 22:58:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:00.442 22:58:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:00.442 22:58:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:00.442 22:58:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:00.442 22:58:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:00.442 { 00:14:00.442 "subsystems": [ 00:14:00.442 { 00:14:00.442 "subsystem": "bdev", 00:14:00.442 "config": [ 
00:14:00.442 { 00:14:00.442 "params": { 00:14:00.442 "io_mechanism": "io_uring", 00:14:00.442 "conserve_cpu": true, 00:14:00.442 "filename": "/dev/nvme0n1", 00:14:00.442 "name": "xnvme_bdev" 00:14:00.442 }, 00:14:00.442 "method": "bdev_xnvme_create" 00:14:00.442 }, 00:14:00.442 { 00:14:00.442 "method": "bdev_wait_for_examine" 00:14:00.442 } 00:14:00.442 ] 00:14:00.442 } 00:14:00.442 ] 00:14:00.442 } 00:14:00.442 [2024-11-26 22:58:39.549571] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:14:00.442 [2024-11-26 22:58:39.550241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83575 ] 00:14:00.703 [2024-11-26 22:58:39.686444] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:00.703 [2024-11-26 22:58:39.717896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.703 [2024-11-26 22:58:39.746352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.963 Running I/O for 5 seconds... 00:14:02.849 39410.00 IOPS, 153.95 MiB/s [2024-11-26T22:58:42.921Z] 39411.00 IOPS, 153.95 MiB/s [2024-11-26T22:58:43.867Z] 38320.00 IOPS, 149.69 MiB/s [2024-11-26T22:58:45.255Z] 37573.25 IOPS, 146.77 MiB/s [2024-11-26T22:58:45.255Z] 37368.40 IOPS, 145.97 MiB/s 00:14:06.128 Latency(us) 00:14:06.128 [2024-11-26T22:58:45.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.128 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:06.128 xnvme_bdev : 5.00 37338.90 145.86 0.00 0.00 1709.23 289.87 17946.78 00:14:06.128 [2024-11-26T22:58:45.255Z] =================================================================================================================== 00:14:06.128 [2024-11-26T22:58:45.255Z] Total : 37338.90 145.86 0.00 0.00 1709.23 289.87 17946.78 00:14:06.128 22:58:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:06.128 22:58:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:06.128 22:58:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:06.128 22:58:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:06.128 22:58:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:06.128 { 00:14:06.128 "subsystems": [ 00:14:06.128 { 00:14:06.128 "subsystem": "bdev", 00:14:06.128 "config": [ 00:14:06.128 { 00:14:06.128 "params": { 00:14:06.128 "io_mechanism": "io_uring", 00:14:06.128 "conserve_cpu": true, 00:14:06.128 "filename": "/dev/nvme0n1", 00:14:06.128 "name": "xnvme_bdev" 00:14:06.128 }, 00:14:06.128 "method": "bdev_xnvme_create" 00:14:06.128 }, 00:14:06.128 { 00:14:06.128 "method": "bdev_wait_for_examine" 00:14:06.128 } 00:14:06.128 ] 00:14:06.128 } 00:14:06.128 ] 00:14:06.128 } 00:14:06.128 [2024-11-26 22:58:45.110498] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
00:14:06.129 [2024-11-26 22:58:45.110649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83642 ] 00:14:06.129 [2024-11-26 22:58:45.250339] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:06.390 [2024-11-26 22:58:45.279011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.390 [2024-11-26 22:58:45.307884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.390 Running I/O for 5 seconds... 00:14:08.719 10423.00 IOPS, 40.71 MiB/s [2024-11-26T22:58:48.418Z] 10461.50 IOPS, 40.87 MiB/s [2024-11-26T22:58:49.806Z] 10577.00 IOPS, 41.32 MiB/s [2024-11-26T22:58:50.746Z] 10607.00 IOPS, 41.43 MiB/s [2024-11-26T22:58:50.746Z] 10602.60 IOPS, 41.42 MiB/s 00:14:11.619 Latency(us) 00:14:11.619 [2024-11-26T22:58:50.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.619 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:11.619 xnvme_bdev : 5.01 10594.44 41.38 0.00 0.00 6030.70 74.04 26214.40 00:14:11.619 [2024-11-26T22:58:50.746Z] =================================================================================================================== 00:14:11.619 [2024-11-26T22:58:50.746Z] Total : 10594.44 41.38 0.00 0.00 6030.70 74.04 26214.40 00:14:11.619 ************************************ 00:14:11.619 END TEST xnvme_bdevperf 00:14:11.619 ************************************ 00:14:11.619 00:14:11.619 real 0m11.096s 00:14:11.619 user 0m7.529s 00:14:11.619 sys 0m2.638s 00:14:11.619 22:58:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:11.619 22:58:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:11.619 22:58:50 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:11.619 22:58:50 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:11.619 22:58:50 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:11.619 22:58:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:11.619 ************************************ 00:14:11.619 START TEST xnvme_fio_plugin 00:14:11.619 ************************************ 00:14:11.619 22:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:11.619 22:58:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:11.619 22:58:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:14:11.619 22:58:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:11.619 22:58:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:11.619 22:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:11.619 22:58:50 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:11.619 22:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:11.619 22:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:11.619 22:58:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:11.619 22:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:11.619 22:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:11.619 22:58:50 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:11.619 22:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:11.619 22:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:11.619 22:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:11.619 22:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:11.620 22:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:11.620 22:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:11.620 22:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:11.620 22:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:11.620 22:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:11.620 22:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:11.620 22:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:11.620 { 00:14:11.620 "subsystems": [ 00:14:11.620 { 00:14:11.620 "subsystem": "bdev", 00:14:11.620 "config": [ 00:14:11.620 { 00:14:11.620 "params": { 00:14:11.620 "io_mechanism": "io_uring", 00:14:11.620 "conserve_cpu": true, 00:14:11.620 "filename": "/dev/nvme0n1", 00:14:11.620 "name": "xnvme_bdev" 00:14:11.620 }, 00:14:11.620 "method": "bdev_xnvme_create" 00:14:11.620 }, 00:14:11.620 { 00:14:11.620 "method": "bdev_wait_for_examine" 00:14:11.620 } 00:14:11.620 ] 00:14:11.620 } 00:14:11.620 ] 00:14:11.620 } 00:14:11.882 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:11.882 fio-3.35 00:14:11.882 Starting 1 thread 00:14:17.178 00:14:17.178 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83749: Tue Nov 26 22:58:56 2024 00:14:17.178 read: IOPS=34.6k, BW=135MiB/s (142MB/s)(676MiB/5001msec) 00:14:17.178 slat (nsec): min=2791, max=69288, avg=3681.09, stdev=2112.21 00:14:17.178 clat (usec): min=983, max=4067, avg=1699.55, stdev=285.32 00:14:17.178 lat (usec): min=986, max=4109, avg=1703.23, stdev=285.79 00:14:17.178 clat percentiles (usec): 00:14:17.178 | 1.00th=[ 1221], 5.00th=[ 1319], 10.00th=[ 1385], 20.00th=[ 1467], 00:14:17.178 | 30.00th=[ 1532], 40.00th=[ 1598], 50.00th=[ 1647], 60.00th=[ 1729], 00:14:17.178 | 70.00th=[ 1811], 
80.00th=[ 1909], 90.00th=[ 2073], 95.00th=[ 2212], 00:14:17.178 | 99.00th=[ 2573], 99.50th=[ 2737], 99.90th=[ 3163], 99.95th=[ 3326], 00:14:17.178 | 99.99th=[ 3884] 00:14:17.178 bw ( KiB/s): min=131072, max=144384, per=100.00%, avg=139036.44, stdev=4658.29, samples=9 00:14:17.178 iops : min=32768, max=36096, avg=34759.11, stdev=1164.57, samples=9 00:14:17.178 lat (usec) : 1000=0.01% 00:14:17.178 lat (msec) : 2=86.47%, 4=13.53%, 10=0.01% 00:14:17.178 cpu : usr=51.16%, sys=44.62%, ctx=12, majf=0, minf=771 00:14:17.178 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:17.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:17.178 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:14:17.178 issued rwts: total=172992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:17.178 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:17.178 00:14:17.178 Run status group 0 (all jobs): 00:14:17.178 READ: bw=135MiB/s (142MB/s), 135MiB/s-135MiB/s (142MB/s-142MB/s), io=676MiB (709MB), run=5001-5001msec 00:14:17.751 ----------------------------------------------------- 00:14:17.751 Suppressions used: 00:14:17.751 count bytes template 00:14:17.751 1 11 /usr/src/fio/parse.c 00:14:17.751 1 8 libtcmalloc_minimal.so 00:14:17.751 1 904 libcrypto.so 00:14:17.751 ----------------------------------------------------- 00:14:17.751 00:14:17.751 22:58:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:17.751 22:58:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:17.751 22:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:17.751 22:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:17.751 22:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:17.751 22:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:17.751 22:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:17.751 22:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:17.751 22:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:17.751 22:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:17.751 22:58:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:17.751 22:58:56 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:17.751 22:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:17.751 22:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:17.751 22:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:17.751 22:58:56 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:17.751 22:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:17.751 22:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:17.751 22:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:17.751 22:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:17.751 22:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:17.751 { 00:14:17.751 "subsystems": [ 00:14:17.751 { 00:14:17.751 "subsystem": "bdev", 00:14:17.751 "config": [ 00:14:17.751 { 00:14:17.751 "params": { 00:14:17.751 "io_mechanism": "io_uring", 00:14:17.751 "conserve_cpu": true, 00:14:17.751 "filename": "/dev/nvme0n1", 00:14:17.751 "name": "xnvme_bdev" 00:14:17.751 }, 00:14:17.751 "method": "bdev_xnvme_create" 00:14:17.751 }, 00:14:17.751 { 00:14:17.751 "method": "bdev_wait_for_examine" 00:14:17.751 } 00:14:17.751 ] 00:14:17.751 } 00:14:17.751 ] 00:14:17.751 } 00:14:18.012 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:18.012 fio-3.35 00:14:18.012 Starting 1 thread 00:14:23.304 00:14:23.304 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83831: Tue Nov 26 22:59:02 2024 00:14:23.304 write: IOPS=36.3k, BW=142MiB/s (149MB/s)(708MiB/5001msec); 0 zone resets 00:14:23.304 slat (usec): min=2, max=108, avg= 4.11, stdev= 2.38 00:14:23.304 clat (usec): min=296, max=9412, avg=1595.49, stdev=252.27 00:14:23.304 lat (usec): min=305, max=9428, avg=1599.60, stdev=252.83 00:14:23.304 clat percentiles (usec): 00:14:23.304 | 1.00th=[ 1156], 5.00th=[ 1270], 10.00th=[ 1336], 20.00th=[ 1401], 00:14:23.304 | 30.00th=[ 1467], 40.00th=[ 1516], 50.00th=[ 1565], 60.00th=[ 1614], 00:14:23.304 | 70.00th=[ 1680], 80.00th=[ 1762], 90.00th=[ 1893], 95.00th=[ 2008], 00:14:23.304 | 99.00th=[ 2343], 99.50th=[ 2474], 99.90th=[ 2868], 99.95th=[ 3163], 00:14:23.304 | 99.99th=[ 7832] 00:14:23.304 bw ( KiB/s): min=139216, max=147280, per=99.96%, avg=144966.22, stdev=2637.67, samples=9 00:14:23.304 iops : min=34804, max=36820, avg=36241.56, stdev=659.42, samples=9 00:14:23.304 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.01% 00:14:23.304 lat (msec) : 2=94.62%, 4=5.32%, 10=0.03% 00:14:23.304 cpu : usr=46.30%, sys=49.08%, ctx=9, majf=0, minf=771 00:14:23.304 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:14:23.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:23.304 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:23.304 issued rwts: total=0,181320,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:23.304 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:23.304 00:14:23.304 Run status group 0 (all jobs): 00:14:23.304 WRITE: bw=142MiB/s (149MB/s), 142MiB/s-142MiB/s (149MB/s-149MB/s), io=708MiB (743MB), run=5001-5001msec 00:14:23.876 ----------------------------------------------------- 00:14:23.876 Suppressions used: 00:14:23.876 count bytes template 00:14:23.876 1 11 /usr/src/fio/parse.c 00:14:23.876 1 8 libtcmalloc_minimal.so 00:14:23.876 1 904 libcrypto.so 00:14:23.876 
----------------------------------------------------- 00:14:23.876 00:14:23.876 00:14:23.876 real 0m12.091s 00:14:23.876 user 0m6.037s 00:14:23.876 sys 0m5.298s 00:14:23.876 22:59:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:23.876 22:59:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:23.876 ************************************ 00:14:23.876 END TEST xnvme_fio_plugin 00:14:23.876 ************************************ 00:14:23.876 22:59:02 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:23.876 22:59:02 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:14:23.876 22:59:02 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:14:23.876 22:59:02 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:14:23.876 22:59:02 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:23.876 22:59:02 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:23.876 22:59:02 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:23.876 22:59:02 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:23.876 22:59:02 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:23.876 22:59:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:23.876 22:59:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:23.876 22:59:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:23.876 ************************************ 00:14:23.876 START TEST xnvme_rpc 00:14:23.876 ************************************ 00:14:23.876 22:59:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:23.876 22:59:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:23.876 22:59:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:23.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.876 22:59:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:23.876 22:59:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:23.876 22:59:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=83906 00:14:23.876 22:59:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 83906 00:14:23.876 22:59:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 83906 ']' 00:14:23.876 22:59:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.876 22:59:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:23.876 22:59:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.876 22:59:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:23.877 22:59:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:23.877 22:59:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.877 [2024-11-26 22:59:02.910267] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
00:14:23.877 [2024-11-26 22:59:02.910447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83906 ] 00:14:24.137 [2024-11-26 22:59:03.051782] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:24.137 [2024-11-26 22:59:03.082217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.137 [2024-11-26 22:59:03.112955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.709 22:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:24.709 22:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:24.709 22:59:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:14:24.709 22:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.709 22:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.709 xnvme_bdev 00:14:24.709 22:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.709 22:59:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:24.709 22:59:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:24.709 22:59:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:24.709 22:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.709 22:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.709 22:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.709 22:59:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:24.709 22:59:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:24.709 22:59:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:24.709 22:59:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:24.709 22:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.709 22:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.709 22:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.971 22:59:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:24.971 22:59:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:24.971 22:59:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:24.971 22:59:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:24.971 22:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.971 22:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.971 22:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.971 22:59:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:24.971 22:59:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- 
# rpc_xnvme conserve_cpu 00:14:24.971 22:59:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:24.971 22:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.971 22:59:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:24.971 22:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.971 22:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.971 22:59:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:24.971 22:59:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:24.971 22:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.971 22:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.971 22:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.971 22:59:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 83906 00:14:24.971 22:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 83906 ']' 00:14:24.971 22:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 83906 00:14:24.971 22:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:24.971 22:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:24.971 22:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83906 00:14:24.971 killing process with pid 83906 00:14:24.971 22:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:24.971 22:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:24.971 22:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83906' 00:14:24.971 22:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 83906 00:14:24.971 22:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 83906 00:14:25.233 ************************************ 00:14:25.233 END TEST xnvme_rpc 00:14:25.233 ************************************ 00:14:25.233 00:14:25.233 real 0m1.442s 00:14:25.233 user 0m1.519s 00:14:25.233 sys 0m0.427s 00:14:25.233 22:59:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:25.233 22:59:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.233 22:59:04 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:25.233 22:59:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:25.233 22:59:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:25.233 22:59:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:25.233 ************************************ 00:14:25.233 START TEST xnvme_bdevperf 00:14:25.233 ************************************ 00:14:25.233 22:59:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:25.233 22:59:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:25.233 22:59:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:25.233 22:59:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:25.233 22:59:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- 
# /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:25.233 22:59:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:25.233 22:59:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:25.233 22:59:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:25.494 { 00:14:25.494 "subsystems": [ 00:14:25.494 { 00:14:25.494 "subsystem": "bdev", 00:14:25.494 "config": [ 00:14:25.494 { 00:14:25.494 "params": { 00:14:25.494 "io_mechanism": "io_uring_cmd", 00:14:25.494 "conserve_cpu": false, 00:14:25.494 "filename": "/dev/ng0n1", 00:14:25.494 "name": "xnvme_bdev" 00:14:25.494 }, 00:14:25.494 "method": "bdev_xnvme_create" 00:14:25.494 }, 00:14:25.494 { 00:14:25.494 "method": "bdev_wait_for_examine" 00:14:25.494 } 00:14:25.494 ] 00:14:25.494 } 00:14:25.494 ] 00:14:25.494 } 00:14:25.494 [2024-11-26 22:59:04.396883] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:14:25.494 [2024-11-26 22:59:04.397214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83971 ] 00:14:25.494 [2024-11-26 22:59:04.537725] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:25.494 [2024-11-26 22:59:04.566791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.494 [2024-11-26 22:59:04.595544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.756 Running I/O for 5 seconds... 
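The run just launched feeds its bdev configuration to bdevperf over a file descriptor (--json /dev/fd/62), which is why the JSON dump appears inline above. A minimal standalone sketch of the same randread pass, assuming the CI paths from this log and introducing a hypothetical shell variable cfg to hold the JSON printed above:

    # Sketch only: cfg is a stand-in for the exact JSON config dumped above.
    cfg='{"subsystems":[{"subsystem":"bdev","config":[
          {"params":{"io_mechanism":"io_uring_cmd","conserve_cpu":false,
                     "filename":"/dev/ng0n1","name":"xnvme_bdev"},
           "method":"bdev_xnvme_create"},
          {"method":"bdev_wait_for_examine"}]}]}'
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(echo "$cfg") -q 64 -w randread -t 5 -T xnvme_bdev -o 4096

Process substitution hands bdevperf a /dev/fd path, matching the /dev/fd/62 plumbing the harness itself uses.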
00:14:27.645 37410.00 IOPS, 146.13 MiB/s [2024-11-26T22:59:07.715Z] 37193.00 IOPS, 145.29 MiB/s [2024-11-26T22:59:08.760Z] 37353.33 IOPS, 145.91 MiB/s [2024-11-26T22:59:10.143Z] 37264.75 IOPS, 145.57 MiB/s [2024-11-26T22:59:10.143Z] 37532.20 IOPS, 146.61 MiB/s 00:14:31.016 Latency(us) 00:14:31.016 [2024-11-26T22:59:10.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.016 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:31.016 xnvme_bdev : 5.00 37512.71 146.53 0.00 0.00 1701.95 321.38 15224.52 00:14:31.016 [2024-11-26T22:59:10.143Z] =================================================================================================================== 00:14:31.016 [2024-11-26T22:59:10.143Z] Total : 37512.71 146.53 0.00 0.00 1701.95 321.38 15224.52 00:14:31.016 22:59:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:31.016 22:59:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:31.016 22:59:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:31.016 22:59:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:31.016 22:59:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:31.016 { 00:14:31.016 "subsystems": [ 00:14:31.016 { 00:14:31.016 "subsystem": "bdev", 00:14:31.016 "config": [ 00:14:31.016 { 00:14:31.016 "params": { 00:14:31.016 "io_mechanism": "io_uring_cmd", 00:14:31.016 "conserve_cpu": false, 00:14:31.016 "filename": "/dev/ng0n1", 00:14:31.016 "name": "xnvme_bdev" 00:14:31.016 }, 00:14:31.016 "method": "bdev_xnvme_create" 00:14:31.016 }, 00:14:31.016 { 00:14:31.016 "method": "bdev_wait_for_examine" 00:14:31.016 } 00:14:31.016 ] 00:14:31.016 } 00:14:31.016 ] 00:14:31.016 } 00:14:31.016 [2024-11-26 22:59:09.951438] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:14:31.016 [2024-11-26 22:59:09.951696] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84034 ] 00:14:31.016 [2024-11-26 22:59:10.084777] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:31.016 [2024-11-26 22:59:10.116764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.016 [2024-11-26 22:59:10.134808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.278 Running I/O for 5 seconds... 
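A quick consistency check on the table above: the MiB/s column is just IOPS scaled by the 4 KiB I/O size set with -o 4096. For the final sample, 37532.20 * 4096 / 2^20 ~= 146.61 MiB/s, which matches the reported 146.61.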
00:14:33.170 18741.00 IOPS, 73.21 MiB/s [2024-11-26T22:59:13.241Z] 18416.50 IOPS, 71.94 MiB/s [2024-11-26T22:59:14.629Z] 21877.67 IOPS, 85.46 MiB/s [2024-11-26T22:59:15.573Z] 27269.50 IOPS, 106.52 MiB/s 00:14:36.446 Latency(us) 00:14:36.446 [2024-11-26T22:59:15.573Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:36.446 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:36.446 xnvme_bdev : 5.00 30105.78 117.60 0.00 0.00 2121.94 57.50 20064.10 00:14:36.446 [2024-11-26T22:59:15.573Z] =================================================================================================================== 00:14:36.446 [2024-11-26T22:59:15.573Z] Total : 30105.78 117.60 0.00 0.00 2121.94 57.50 20064.10 00:14:36.446 22:59:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:36.446 22:59:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:36.446 22:59:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:36.446 22:59:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:36.446 22:59:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:36.446 { 00:14:36.446 "subsystems": [ 00:14:36.446 { 00:14:36.446 "subsystem": "bdev", 00:14:36.446 "config": [ 00:14:36.446 { 00:14:36.446 "params": { 00:14:36.446 "io_mechanism": "io_uring_cmd", 00:14:36.446 "conserve_cpu": false, 00:14:36.446 "filename": "/dev/ng0n1", 00:14:36.446 "name": "xnvme_bdev" 00:14:36.446 }, 00:14:36.446 "method": "bdev_xnvme_create" 00:14:36.446 }, 00:14:36.446 { 00:14:36.446 "method": "bdev_wait_for_examine" 00:14:36.446 } 00:14:36.446 ] 00:14:36.446 } 00:14:36.446 ] 00:14:36.446 } 00:14:36.446 [2024-11-26 22:59:15.469961] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:14:36.446 [2024-11-26 22:59:15.470125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84097 ] 00:14:36.708 [2024-11-26 22:59:15.611207] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:36.708 [2024-11-26 22:59:15.642088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.708 [2024-11-26 22:59:15.669550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.708 Running I/O for 5 seconds... 
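Each of the four bdevperf passes in this test reuses the identical bdev JSON; only the -w workload argument changes. The traces show the driver (for io_pattern in "${io_pattern_ref[@]}", with the list bound by nameref to io_uring_cmd); inlining that list gives roughly this sketch, where $bdevperf and gen_conf stand in for the binary path and config helper seen in the log:

    # Simplified sketch of the xnvme_bdevperf loop; pattern list inlined for clarity.
    for io_pattern in randread randwrite unmap write_zeroes; do
        "$bdevperf" --json <(gen_conf) -q 64 -w "$io_pattern" \
            -t 5 -T xnvme_bdev -o 4096
    done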
00:14:39.043 72256.00 IOPS, 282.25 MiB/s [2024-11-26T22:59:19.114Z] 72192.00 IOPS, 282.00 MiB/s [2024-11-26T22:59:20.074Z] 72106.67 IOPS, 281.67 MiB/s [2024-11-26T22:59:21.014Z] 74608.00 IOPS, 291.44 MiB/s 00:14:41.887 Latency(us) 00:14:41.887 [2024-11-26T22:59:21.014Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.887 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:41.887 xnvme_bdev : 5.00 78368.29 306.13 0.00 0.00 813.25 475.77 2621.44 00:14:41.887 [2024-11-26T22:59:21.014Z] =================================================================================================================== 00:14:41.887 [2024-11-26T22:59:21.014Z] Total : 78368.29 306.13 0.00 0.00 813.25 475.77 2621.44 00:14:41.887 22:59:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:41.887 22:59:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:41.887 22:59:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:41.887 22:59:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:41.887 22:59:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:41.887 { 00:14:41.887 "subsystems": [ 00:14:41.887 { 00:14:41.887 "subsystem": "bdev", 00:14:41.887 "config": [ 00:14:41.887 { 00:14:41.887 "params": { 00:14:41.887 "io_mechanism": "io_uring_cmd", 00:14:41.887 "conserve_cpu": false, 00:14:41.887 "filename": "/dev/ng0n1", 00:14:41.887 "name": "xnvme_bdev" 00:14:41.887 }, 00:14:41.887 "method": "bdev_xnvme_create" 00:14:41.887 }, 00:14:41.887 { 00:14:41.887 "method": "bdev_wait_for_examine" 00:14:41.887 } 00:14:41.887 ] 00:14:41.887 } 00:14:41.887 ] 00:14:41.887 } 00:14:41.887 [2024-11-26 22:59:20.968170] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:14:41.887 [2024-11-26 22:59:20.968480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84166 ] 00:14:42.148 [2024-11-26 22:59:21.102840] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:42.148 [2024-11-26 22:59:21.129017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.149 [2024-11-26 22:59:21.144394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.149 Running I/O for 5 seconds... 
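The write_zeroes pass launched above is the outlier of the sweep: the summary below lands at 627.94 IOPS (627.94 * 4096 / 2^20 ~= 2.45 MiB/s) with a 5.48 s effective runtime and a max completion near 936 ms, against five-digit IOPS for the other three workloads.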
00:14:44.475 136.00 IOPS, 0.53 MiB/s [2024-11-26T22:59:24.545Z] 137.00 IOPS, 0.54 MiB/s [2024-11-26T22:59:25.488Z] 1034.00 IOPS, 4.04 MiB/s [2024-11-26T22:59:26.431Z] 809.25 IOPS, 3.16 MiB/s [2024-11-26T22:59:26.692Z] 675.00 IOPS, 2.64 MiB/s 00:14:47.565 Latency(us) 00:14:47.565 [2024-11-26T22:59:26.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.565 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:47.565 xnvme_bdev : 5.48 627.94 2.45 0.00 0.00 97515.10 139.42 935652.43 00:14:47.565 [2024-11-26T22:59:26.692Z] =================================================================================================================== 00:14:47.565 [2024-11-26T22:59:26.692Z] Total : 627.94 2.45 0.00 0.00 97515.10 139.42 935652.43 00:14:47.827 00:14:47.827 real 0m22.484s 00:14:47.827 user 0m11.636s 00:14:47.827 sys 0m10.423s 00:14:47.827 22:59:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:47.827 ************************************ 00:14:47.827 END TEST xnvme_bdevperf 00:14:47.827 ************************************ 00:14:47.827 22:59:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:47.827 22:59:26 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:47.827 22:59:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:47.827 22:59:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:47.827 22:59:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:47.827 ************************************ 00:14:47.827 START TEST xnvme_fio_plugin 00:14:47.827 ************************************ 00:14:47.827 22:59:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:47.827 22:59:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:47.827 22:59:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:47.827 22:59:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:47.827 22:59:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:47.827 22:59:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:47.827 22:59:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:47.827 22:59:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:47.827 22:59:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:47.827 22:59:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:47.827 22:59:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:47.827 22:59:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:47.827 22:59:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 
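The xtrace above is the harness working out what to LD_PRELOAD: the spdk_bdev fio plugin is loaded as an external ioengine, and on an ASan build the sanitizer runtime is placed first in the preload list. Stripped of the wrapper functions, the command it converges on (it appears verbatim a few records below) is:

    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev

with /dev/fd/62 carrying the same bdev JSON used for the bdevperf runs.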
00:14:47.827 22:59:26 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:47.827 22:59:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:47.827 22:59:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:47.827 22:59:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:47.827 22:59:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:47.827 22:59:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:47.827 22:59:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:47.827 22:59:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:47.827 22:59:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:47.827 22:59:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:47.827 22:59:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:47.827 { 00:14:47.827 "subsystems": [ 00:14:47.827 { 00:14:47.827 "subsystem": "bdev", 00:14:47.827 "config": [ 00:14:47.827 { 00:14:47.827 "params": { 00:14:47.827 "io_mechanism": "io_uring_cmd", 00:14:47.827 "conserve_cpu": false, 00:14:47.827 "filename": "/dev/ng0n1", 00:14:47.827 "name": "xnvme_bdev" 00:14:47.827 }, 00:14:47.827 "method": "bdev_xnvme_create" 00:14:47.827 }, 00:14:47.827 { 00:14:47.827 "method": "bdev_wait_for_examine" 00:14:47.827 } 00:14:47.827 ] 00:14:47.827 } 00:14:47.827 ] 00:14:47.827 } 00:14:48.088 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:48.088 fio-3.35 00:14:48.088 Starting 1 thread 00:14:53.381 00:14:53.381 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84273: Tue Nov 26 22:59:32 2024 00:14:53.381 read: IOPS=40.1k, BW=157MiB/s (164MB/s)(783MiB/5001msec) 00:14:53.381 slat (nsec): min=2780, max=95282, avg=3078.89, stdev=1086.67 00:14:53.381 clat (usec): min=709, max=6948, avg=1475.66, stdev=356.06 00:14:53.381 lat (usec): min=712, max=6951, avg=1478.74, stdev=356.11 00:14:53.381 clat percentiles (usec): 00:14:53.381 | 1.00th=[ 947], 5.00th=[ 1057], 10.00th=[ 1106], 20.00th=[ 1188], 00:14:53.381 | 30.00th=[ 1254], 40.00th=[ 1319], 50.00th=[ 1401], 60.00th=[ 1483], 00:14:53.381 | 70.00th=[ 1598], 80.00th=[ 1745], 90.00th=[ 1958], 95.00th=[ 2147], 00:14:53.381 | 99.00th=[ 2474], 99.50th=[ 2638], 99.90th=[ 3425], 99.95th=[ 3621], 00:14:53.381 | 99.99th=[ 6915] 00:14:53.381 bw ( KiB/s): min=148992, max=166912, per=100.00%, avg=161450.67, stdev=5596.98, samples=9 00:14:53.381 iops : min=37248, max=41728, avg=40362.67, stdev=1399.25, samples=9 00:14:53.381 lat (usec) : 750=0.01%, 1000=2.47% 00:14:53.381 lat (msec) : 2=89.08%, 4=8.41%, 10=0.03% 00:14:53.381 cpu : usr=38.80%, sys=60.36%, ctx=9, majf=0, minf=771 00:14:53.381 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:53.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:53.381 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=1.5%, >=64=0.0% 00:14:53.381 issued rwts: total=200512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:53.381 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:53.381 00:14:53.381 Run status group 0 (all jobs): 00:14:53.381 READ: bw=157MiB/s (164MB/s), 157MiB/s-157MiB/s (164MB/s-164MB/s), io=783MiB (821MB), run=5001-5001msec 00:14:53.954 ----------------------------------------------------- 00:14:53.954 Suppressions used: 00:14:53.954 count bytes template 00:14:53.954 1 11 /usr/src/fio/parse.c 00:14:53.954 1 8 libtcmalloc_minimal.so 00:14:53.954 1 904 libcrypto.so 00:14:53.954 ----------------------------------------------------- 00:14:53.954 00:14:53.954 22:59:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:53.954 22:59:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:53.954 22:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:53.954 22:59:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:53.954 22:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:53.954 22:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:53.954 22:59:32 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:53.954 22:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:53.954 22:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:53.954 22:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:53.954 22:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:53.954 22:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:53.954 22:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:53.954 22:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:53.954 22:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:53.954 22:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:53.954 { 00:14:53.954 "subsystems": [ 00:14:53.954 { 00:14:53.954 "subsystem": "bdev", 00:14:53.954 "config": [ 00:14:53.954 { 00:14:53.954 "params": { 00:14:53.954 "io_mechanism": "io_uring_cmd", 00:14:53.954 "conserve_cpu": false, 00:14:53.954 "filename": "/dev/ng0n1", 00:14:53.954 "name": "xnvme_bdev" 00:14:53.954 }, 00:14:53.954 "method": "bdev_xnvme_create" 00:14:53.954 }, 00:14:53.954 { 00:14:53.954 "method": "bdev_wait_for_examine" 00:14:53.954 } 00:14:53.954 ] 00:14:53.954 } 00:14:53.954 ] 00:14:53.954 } 00:14:53.954 22:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:53.954 22:59:32 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:53.954 22:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:53.954 22:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:53.954 22:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:54.215 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:54.215 fio-3.35 00:14:54.215 Starting 1 thread 00:14:59.506 00:14:59.506 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84353: Tue Nov 26 22:59:38 2024 00:14:59.506 write: IOPS=44.3k, BW=173MiB/s (181MB/s)(865MiB/5001msec); 0 zone resets 00:14:59.506 slat (nsec): min=2143, max=69161, avg=3309.03, stdev=1300.43 00:14:59.506 clat (usec): min=131, max=5511, avg=1320.96, stdev=329.89 00:14:59.506 lat (usec): min=134, max=5515, avg=1324.27, stdev=330.12 00:14:59.506 clat percentiles (usec): 00:14:59.506 | 1.00th=[ 766], 5.00th=[ 873], 10.00th=[ 947], 20.00th=[ 1057], 00:14:59.506 | 30.00th=[ 1139], 40.00th=[ 1205], 50.00th=[ 1270], 60.00th=[ 1352], 00:14:59.506 | 70.00th=[ 1450], 80.00th=[ 1582], 90.00th=[ 1745], 95.00th=[ 1893], 00:14:59.506 | 99.00th=[ 2278], 99.50th=[ 2442], 99.90th=[ 3294], 99.95th=[ 3589], 00:14:59.506 | 99.99th=[ 4686] 00:14:59.506 bw ( KiB/s): min=159984, max=204103, per=100.00%, avg=178161.67, stdev=16298.58, samples=9 00:14:59.506 iops : min=39996, max=51025, avg=44540.33, stdev=4074.50, samples=9 00:14:59.506 lat (usec) : 250=0.01%, 500=0.08%, 750=0.70%, 1000=13.78% 00:14:59.506 lat (msec) : 2=82.37%, 4=3.04%, 10=0.02% 00:14:59.506 cpu : usr=40.40%, sys=58.78%, ctx=16, majf=0, minf=771 00:14:59.506 IO depths : 1=1.4%, 2=2.9%, 4=5.9%, 8=11.9%, 16=24.2%, 32=51.9%, >=64=1.7% 00:14:59.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.506 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:59.506 issued rwts: total=0,221479,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:59.506 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:59.506 00:14:59.506 Run status group 0 (all jobs): 00:14:59.506 WRITE: bw=173MiB/s (181MB/s), 173MiB/s-173MiB/s (181MB/s-181MB/s), io=865MiB (907MB), run=5001-5001msec 00:15:00.079 ----------------------------------------------------- 00:15:00.079 Suppressions used: 00:15:00.079 count bytes template 00:15:00.079 1 11 /usr/src/fio/parse.c 00:15:00.079 1 8 libtcmalloc_minimal.so 00:15:00.079 1 904 libcrypto.so 00:15:00.079 ----------------------------------------------------- 00:15:00.079 00:15:00.079 00:15:00.079 real 0m12.095s 00:15:00.079 user 0m5.125s 00:15:00.079 sys 0m6.569s 00:15:00.079 22:59:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:00.079 ************************************ 00:15:00.079 END TEST xnvme_fio_plugin 00:15:00.079 ************************************ 00:15:00.079 22:59:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:00.079 22:59:39 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:00.079 22:59:39 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:15:00.079 22:59:39 
nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:15:00.079 22:59:39 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:00.079 22:59:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:00.079 22:59:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:00.079 22:59:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:00.079 ************************************ 00:15:00.079 START TEST xnvme_rpc 00:15:00.079 ************************************ 00:15:00.079 22:59:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:00.079 22:59:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:00.079 22:59:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:00.079 22:59:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:00.079 22:59:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:00.079 22:59:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=84433 00:15:00.079 22:59:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 84433 00:15:00.079 22:59:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 84433 ']' 00:15:00.079 22:59:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.079 22:59:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:00.079 22:59:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.079 22:59:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:00.079 22:59:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.079 22:59:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:00.079 [2024-11-26 22:59:39.120621] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:15:00.079 [2024-11-26 22:59:39.120777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84433 ] 00:15:00.340 [2024-11-26 22:59:39.258425] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
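This second xnvme_rpc pass repeats the earlier one with conserve_cpu enabled (the -c flag on bdev_xnvme_create below, mapped from cc["true"]=-c above). Against a running spdk_tgt, the same create-verify-delete sequence could be driven directly with rpc.py; a sketch assuming the repo layout from this log:

    # Create with CPU conservation on, read the flag back, then tear down.
    scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
    scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'  # expect: true
    scripts/rpc.py bdev_xnvme_delete xnvme_bdev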
00:15:00.340 [2024-11-26 22:59:39.288376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.340 [2024-11-26 22:59:39.319405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.913 22:59:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:00.913 22:59:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:00.913 22:59:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:15:00.913 22:59:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.913 22:59:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.913 xnvme_bdev 00:15:00.913 22:59:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.913 22:59:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:00.913 22:59:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:00.913 22:59:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.913 22:59:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.913 22:59:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:00.913 22:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.913 22:59:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:00.913 22:59:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:00.913 22:59:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:00.913 22:59:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:00.913 22:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.913 22:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.913 22:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.174 22:59:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:15:01.174 22:59:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:01.174 22:59:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:01.174 22:59:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:01.174 22:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.174 22:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:01.174 22:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.174 22:59:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:15:01.174 22:59:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:01.174 22:59:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:01.174 22:59:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:01.174 22:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.174 22:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:01.174 22:59:40 nvme_xnvme.xnvme_rpc 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.174 22:59:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:15:01.174 22:59:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:01.174 22:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.174 22:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:01.174 22:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.174 22:59:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 84433 00:15:01.174 22:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 84433 ']' 00:15:01.174 22:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 84433 00:15:01.174 22:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:01.174 22:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:01.174 22:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84433 00:15:01.174 killing process with pid 84433 00:15:01.174 22:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:01.174 22:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:01.174 22:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84433' 00:15:01.174 22:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 84433 00:15:01.174 22:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 84433 00:15:01.436 00:15:01.436 real 0m1.448s 00:15:01.436 user 0m1.513s 00:15:01.436 sys 0m0.430s 00:15:01.436 22:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:01.436 ************************************ 00:15:01.436 END TEST xnvme_rpc 00:15:01.436 ************************************ 00:15:01.436 22:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:01.436 22:59:40 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:01.436 22:59:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:01.436 22:59:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.436 22:59:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:01.436 ************************************ 00:15:01.436 START TEST xnvme_bdevperf 00:15:01.436 ************************************ 00:15:01.436 22:59:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:01.436 22:59:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:01.436 22:59:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:15:01.436 22:59:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:01.436 22:59:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:01.436 22:59:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:01.436 22:59:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:01.436 22:59:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:01.697 { 00:15:01.697 "subsystems": [ 00:15:01.697 { 00:15:01.697 "subsystem": "bdev", 00:15:01.697 "config": [ 
00:15:01.697 { 00:15:01.697 "params": { 00:15:01.697 "io_mechanism": "io_uring_cmd", 00:15:01.697 "conserve_cpu": true, 00:15:01.697 "filename": "/dev/ng0n1", 00:15:01.697 "name": "xnvme_bdev" 00:15:01.697 }, 00:15:01.697 "method": "bdev_xnvme_create" 00:15:01.697 }, 00:15:01.697 { 00:15:01.697 "method": "bdev_wait_for_examine" 00:15:01.697 } 00:15:01.697 ] 00:15:01.697 } 00:15:01.697 ] 00:15:01.697 } 00:15:01.697 [2024-11-26 22:59:40.628791] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:15:01.697 [2024-11-26 22:59:40.629289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84490 ] 00:15:01.697 [2024-11-26 22:59:40.768753] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:01.697 [2024-11-26 22:59:40.798477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.958 [2024-11-26 22:59:40.826873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.958 Running I/O for 5 seconds... 00:15:03.892 41216.00 IOPS, 161.00 MiB/s [2024-11-26T22:59:43.962Z] 39488.00 IOPS, 154.25 MiB/s [2024-11-26T22:59:45.349Z] 39360.00 IOPS, 153.75 MiB/s [2024-11-26T22:59:46.293Z] 39264.00 IOPS, 153.38 MiB/s [2024-11-26T22:59:46.293Z] 39385.60 IOPS, 153.85 MiB/s 00:15:07.166 Latency(us) 00:15:07.166 [2024-11-26T22:59:46.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.166 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:07.166 xnvme_bdev : 5.01 39359.77 153.75 0.00 0.00 1622.59 746.73 4285.05 00:15:07.166 [2024-11-26T22:59:46.293Z] =================================================================================================================== 00:15:07.166 [2024-11-26T22:59:46.293Z] Total : 39359.77 153.75 0.00 0.00 1622.59 746.73 4285.05 00:15:07.166 22:59:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:07.166 22:59:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:07.166 22:59:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:07.166 22:59:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:07.166 22:59:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:07.166 { 00:15:07.166 "subsystems": [ 00:15:07.166 { 00:15:07.166 "subsystem": "bdev", 00:15:07.166 "config": [ 00:15:07.166 { 00:15:07.166 "params": { 00:15:07.166 "io_mechanism": "io_uring_cmd", 00:15:07.166 "conserve_cpu": true, 00:15:07.166 "filename": "/dev/ng0n1", 00:15:07.166 "name": "xnvme_bdev" 00:15:07.166 }, 00:15:07.166 "method": "bdev_xnvme_create" 00:15:07.166 }, 00:15:07.166 { 00:15:07.166 "method": "bdev_wait_for_examine" 00:15:07.166 } 00:15:07.166 ] 00:15:07.166 } 00:15:07.166 ] 00:15:07.166 } 00:15:07.166 [2024-11-26 22:59:46.215584] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
00:15:07.166 [2024-11-26 22:59:46.215760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84553 ] 00:15:07.426 [2024-11-26 22:59:46.361725] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:07.426 [2024-11-26 22:59:46.389233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.426 [2024-11-26 22:59:46.416734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.426 Running I/O for 5 seconds... 00:15:09.756 41623.00 IOPS, 162.59 MiB/s [2024-11-26T22:59:49.838Z] 41334.00 IOPS, 161.46 MiB/s [2024-11-26T22:59:50.782Z] 42755.00 IOPS, 167.01 MiB/s [2024-11-26T22:59:51.724Z] 42851.00 IOPS, 167.39 MiB/s [2024-11-26T22:59:51.724Z] 42303.20 IOPS, 165.25 MiB/s 00:15:12.597 Latency(us) 00:15:12.597 [2024-11-26T22:59:51.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.597 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:12.597 xnvme_bdev : 5.00 42296.46 165.22 0.00 0.00 1509.20 318.23 13107.20 00:15:12.597 [2024-11-26T22:59:51.724Z] =================================================================================================================== 00:15:12.597 [2024-11-26T22:59:51.724Z] Total : 42296.46 165.22 0.00 0.00 1509.20 318.23 13107.20 00:15:12.597 22:59:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:12.597 22:59:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:12.597 22:59:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:12.597 22:59:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:12.597 22:59:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:12.597 { 00:15:12.597 "subsystems": [ 00:15:12.597 { 00:15:12.597 "subsystem": "bdev", 00:15:12.597 "config": [ 00:15:12.597 { 00:15:12.597 "params": { 00:15:12.597 "io_mechanism": "io_uring_cmd", 00:15:12.597 "conserve_cpu": true, 00:15:12.597 "filename": "/dev/ng0n1", 00:15:12.597 "name": "xnvme_bdev" 00:15:12.597 }, 00:15:12.597 "method": "bdev_xnvme_create" 00:15:12.597 }, 00:15:12.597 { 00:15:12.597 "method": "bdev_wait_for_examine" 00:15:12.597 } 00:15:12.597 ] 00:15:12.597 } 00:15:12.597 ] 00:15:12.597 } 00:15:12.858 [2024-11-26 22:59:51.753291] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:15:12.858 [2024-11-26 22:59:51.753437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84622 ] 00:15:12.858 [2024-11-26 22:59:51.889253] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:12.858 [2024-11-26 22:59:51.919840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.858 [2024-11-26 22:59:51.949056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.120 Running I/O for 5 seconds... 
00:15:15.009 72384.00 IOPS, 282.75 MiB/s [2024-11-26T22:59:55.079Z] 73728.00 IOPS, 288.00 MiB/s [2024-11-26T22:59:56.463Z] 81002.67 IOPS, 316.42 MiB/s [2024-11-26T22:59:57.404Z] 84688.00 IOPS, 330.81 MiB/s [2024-11-26T22:59:57.404Z] 85772.80 IOPS, 335.05 MiB/s 00:15:18.277 Latency(us) 00:15:18.277 [2024-11-26T22:59:57.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:18.277 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:18.277 xnvme_bdev : 5.00 85734.49 334.90 0.00 0.00 743.12 316.65 2923.91 00:15:18.277 [2024-11-26T22:59:57.404Z] =================================================================================================================== 00:15:18.277 [2024-11-26T22:59:57.404Z] Total : 85734.49 334.90 0.00 0.00 743.12 316.65 2923.91 00:15:18.277 22:59:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:18.277 22:59:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:18.277 22:59:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:18.277 22:59:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:18.277 22:59:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:18.277 { 00:15:18.277 "subsystems": [ 00:15:18.277 { 00:15:18.277 "subsystem": "bdev", 00:15:18.277 "config": [ 00:15:18.277 { 00:15:18.277 "params": { 00:15:18.277 "io_mechanism": "io_uring_cmd", 00:15:18.277 "conserve_cpu": true, 00:15:18.277 "filename": "/dev/ng0n1", 00:15:18.277 "name": "xnvme_bdev" 00:15:18.277 }, 00:15:18.277 "method": "bdev_xnvme_create" 00:15:18.277 }, 00:15:18.277 { 00:15:18.277 "method": "bdev_wait_for_examine" 00:15:18.277 } 00:15:18.277 ] 00:15:18.277 } 00:15:18.277 ] 00:15:18.277 } 00:15:18.277 [2024-11-26 22:59:57.311256] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:15:18.277 [2024-11-26 22:59:57.311398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84685 ] 00:15:18.537 [2024-11-26 22:59:57.448934] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:18.537 [2024-11-26 22:59:57.475982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.537 [2024-11-26 22:59:57.504817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.537 Running I/O for 5 seconds... 
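As in the conserve_cpu=false sweep, write_zeroes is again the slow outlier: the summary below comes in at 477.22 IOPS (about 1.86 MiB/s at 4 KiB) over a 5.17 s effective window, with a max completion of roughly 632 ms.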
00:15:20.492 232.00 IOPS, 0.91 MiB/s [2024-11-26T23:00:01.005Z] 248.00 IOPS, 0.97 MiB/s [2024-11-26T23:00:01.946Z] 254.00 IOPS, 0.99 MiB/s [2024-11-26T23:00:02.889Z] 502.50 IOPS, 1.96 MiB/s [2024-11-26T23:00:02.889Z] 481.00 IOPS, 1.88 MiB/s 00:15:23.762 Latency(us) 00:15:23.762 [2024-11-26T23:00:02.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.762 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:23.762 xnvme_bdev : 5.17 477.22 1.86 0.00 0.00 131935.73 100.04 632371.99 00:15:23.762 [2024-11-26T23:00:02.889Z] =================================================================================================================== 00:15:23.762 [2024-11-26T23:00:02.889Z] Total : 477.22 1.86 0.00 0.00 131935.73 100.04 632371.99 00:15:24.021 00:15:24.021 real 0m22.408s 00:15:24.021 user 0m17.504s 00:15:24.021 sys 0m4.097s 00:15:24.021 23:00:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:24.021 ************************************ 00:15:24.021 END TEST xnvme_bdevperf 00:15:24.021 ************************************ 00:15:24.021 23:00:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:24.021 23:00:03 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:24.021 23:00:03 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:24.021 23:00:03 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:24.021 23:00:03 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:24.021 ************************************ 00:15:24.021 START TEST xnvme_fio_plugin 00:15:24.021 ************************************ 00:15:24.021 23:00:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:24.021 23:00:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:24.021 23:00:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:24.021 23:00:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:24.021 23:00:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:24.021 23:00:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:24.021 23:00:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:24.021 23:00:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:24.021 23:00:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:24.021 23:00:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:24.021 23:00:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:24.021 23:00:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:24.021 23:00:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 
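A number worth comparing across the two configurations is where the CPU time goes. The conserve_cpu=false fio randread job earlier reported cpu: usr=38.80%, sys=60.36%; the conserve_cpu=true job below reports usr=86.86%, sys=10.98% at nearly the same bandwidth (154 vs 157 MiB/s). The bdevperf totals move the same way: user 0m11.636s / sys 0m10.423s before, versus user 0m17.504s / sys 0m4.097s above.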
00:15:24.021 23:00:03 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:24.021 23:00:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:24.021 23:00:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:24.021 23:00:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:24.021 23:00:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:24.021 23:00:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:24.021 23:00:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:24.021 23:00:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:24.021 23:00:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:24.021 23:00:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:24.022 23:00:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:24.022 { 00:15:24.022 "subsystems": [ 00:15:24.022 { 00:15:24.022 "subsystem": "bdev", 00:15:24.022 "config": [ 00:15:24.022 { 00:15:24.022 "params": { 00:15:24.022 "io_mechanism": "io_uring_cmd", 00:15:24.022 "conserve_cpu": true, 00:15:24.022 "filename": "/dev/ng0n1", 00:15:24.022 "name": "xnvme_bdev" 00:15:24.022 }, 00:15:24.022 "method": "bdev_xnvme_create" 00:15:24.022 }, 00:15:24.022 { 00:15:24.022 "method": "bdev_wait_for_examine" 00:15:24.022 } 00:15:24.022 ] 00:15:24.022 } 00:15:24.022 ] 00:15:24.022 } 00:15:24.280 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:24.280 fio-3.35 00:15:24.280 Starting 1 thread 00:15:29.559 00:15:29.559 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84792: Tue Nov 26 23:00:08 2024 00:15:29.559 read: IOPS=39.4k, BW=154MiB/s (161MB/s)(770MiB/5001msec) 00:15:29.559 slat (usec): min=2, max=150, avg= 2.94, stdev= 1.10 00:15:29.559 clat (usec): min=619, max=3463, avg=1507.22, stdev=359.19 00:15:29.559 lat (usec): min=622, max=3492, avg=1510.16, stdev=359.28 00:15:29.559 clat percentiles (usec): 00:15:29.559 | 1.00th=[ 889], 5.00th=[ 1037], 10.00th=[ 1106], 20.00th=[ 1188], 00:15:29.559 | 30.00th=[ 1270], 40.00th=[ 1352], 50.00th=[ 1450], 60.00th=[ 1565], 00:15:29.559 | 70.00th=[ 1680], 80.00th=[ 1811], 90.00th=[ 1991], 95.00th=[ 2147], 00:15:29.559 | 99.00th=[ 2507], 99.50th=[ 2638], 99.90th=[ 2900], 99.95th=[ 3032], 00:15:29.559 | 99.99th=[ 3359] 00:15:29.559 bw ( KiB/s): min=150016, max=163840, per=98.58%, avg=155420.44, stdev=4292.19, samples=9 00:15:29.559 iops : min=37504, max=40960, avg=38855.11, stdev=1073.05, samples=9 00:15:29.559 lat (usec) : 750=0.09%, 1000=3.36% 00:15:29.559 lat (msec) : 2=86.96%, 4=9.59% 00:15:29.559 cpu : usr=86.86%, sys=10.98%, ctx=8, majf=0, minf=771 00:15:29.559 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:29.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.559 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 
00:15:29.559 issued rwts: total=197120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.559 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:29.559 00:15:29.559 Run status group 0 (all jobs): 00:15:29.559 READ: bw=154MiB/s (161MB/s), 154MiB/s-154MiB/s (161MB/s-161MB/s), io=770MiB (807MB), run=5001-5001msec 00:15:30.141 ----------------------------------------------------- 00:15:30.141 Suppressions used: 00:15:30.141 count bytes template 00:15:30.141 1 11 /usr/src/fio/parse.c 00:15:30.141 1 8 libtcmalloc_minimal.so 00:15:30.141 1 904 libcrypto.so 00:15:30.141 ----------------------------------------------------- 00:15:30.141 00:15:30.141 23:00:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:30.141 23:00:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:30.141 23:00:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:30.141 23:00:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:30.141 23:00:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:30.141 23:00:09 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:30.141 23:00:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:30.141 23:00:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:30.141 23:00:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:30.141 23:00:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:30.141 23:00:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:30.141 23:00:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:30.141 23:00:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:30.141 23:00:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:30.141 23:00:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:30.141 23:00:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:30.141 23:00:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:30.141 23:00:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:30.141 23:00:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:30.141 23:00:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:30.141 23:00:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:30.141 { 00:15:30.141 "subsystems": [ 00:15:30.141 { 00:15:30.141 "subsystem": "bdev", 00:15:30.141 "config": [ 00:15:30.141 { 00:15:30.141 "params": { 00:15:30.141 "io_mechanism": "io_uring_cmd", 00:15:30.141 "conserve_cpu": true, 00:15:30.141 "filename": "/dev/ng0n1", 00:15:30.141 "name": "xnvme_bdev" 00:15:30.141 }, 00:15:30.141 "method": "bdev_xnvme_create" 00:15:30.141 }, 00:15:30.141 { 00:15:30.141 "method": "bdev_wait_for_examine" 00:15:30.141 } 00:15:30.141 ] 00:15:30.141 } 00:15:30.141 ] 00:15:30.141 } 00:15:30.141 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:30.141 fio-3.35 00:15:30.141 Starting 1 thread 00:15:36.783 00:15:36.783 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84872: Tue Nov 26 23:00:14 2024 00:15:36.783 write: IOPS=38.3k, BW=150MiB/s (157MB/s)(748MiB/5001msec); 0 zone resets 00:15:36.783 slat (usec): min=2, max=146, avg= 3.89, stdev= 2.20 00:15:36.783 clat (usec): min=746, max=6559, avg=1513.86, stdev=293.58 00:15:36.783 lat (usec): min=749, max=6567, avg=1517.75, stdev=294.04 00:15:36.783 clat percentiles (usec): 00:15:36.783 | 1.00th=[ 930], 5.00th=[ 1057], 10.00th=[ 1156], 20.00th=[ 1287], 00:15:36.783 | 30.00th=[ 1352], 40.00th=[ 1434], 50.00th=[ 1500], 60.00th=[ 1565], 00:15:36.783 | 70.00th=[ 1647], 80.00th=[ 1729], 90.00th=[ 1876], 95.00th=[ 2008], 00:15:36.783 | 99.00th=[ 2311], 99.50th=[ 2507], 99.90th=[ 3130], 99.95th=[ 3326], 00:15:36.783 | 99.99th=[ 3785] 00:15:36.783 bw ( KiB/s): min=142408, max=169840, per=99.03%, avg=151739.56, stdev=8667.92, samples=9 00:15:36.783 iops : min=35602, max=42460, avg=37934.89, stdev=2166.98, samples=9 00:15:36.783 lat (usec) : 750=0.01%, 1000=2.65% 00:15:36.783 lat (msec) : 2=92.14%, 4=5.20%, 10=0.01% 00:15:36.783 cpu : usr=57.28%, sys=39.06%, ctx=14, majf=0, minf=771 00:15:36.783 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:15:36.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.783 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:36.783 issued rwts: total=0,191571,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:36.783 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:36.783 00:15:36.783 Run status group 0 (all jobs): 00:15:36.783 WRITE: bw=150MiB/s (157MB/s), 150MiB/s-150MiB/s (157MB/s-157MB/s), io=748MiB (785MB), run=5001-5001msec 00:15:36.783 ----------------------------------------------------- 00:15:36.783 Suppressions used: 00:15:36.784 count bytes template 00:15:36.784 1 11 /usr/src/fio/parse.c 00:15:36.784 1 8 libtcmalloc_minimal.so 00:15:36.784 1 904 libcrypto.so 00:15:36.784 ----------------------------------------------------- 00:15:36.784 00:15:36.784 ************************************ 00:15:36.784 END TEST xnvme_fio_plugin 00:15:36.784 ************************************ 00:15:36.784 00:15:36.784 real 0m12.006s 00:15:36.784 user 0m8.316s 00:15:36.784 sys 0m3.087s 00:15:36.784 23:00:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:36.784 23:00:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:36.784 Process with pid 84433 is not found 00:15:36.784 23:00:15 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 84433 00:15:36.784 23:00:15 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 84433 ']' 00:15:36.784 23:00:15 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 84433 
00:15:36.784 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84433) - No such process 00:15:36.784 23:00:15 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 84433 is not found' 00:15:36.784 00:15:36.784 real 2m58.528s 00:15:36.784 user 1m34.412s 00:15:36.784 sys 1m10.511s 00:15:36.784 23:00:15 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:36.784 ************************************ 00:15:36.784 END TEST nvme_xnvme 00:15:36.784 ************************************ 00:15:36.784 23:00:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:36.784 23:00:15 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:36.784 23:00:15 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:36.784 23:00:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:36.784 23:00:15 -- common/autotest_common.sh@10 -- # set +x 00:15:36.784 ************************************ 00:15:36.784 START TEST blockdev_xnvme 00:15:36.784 ************************************ 00:15:36.784 23:00:15 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:36.784 * Looking for test storage... 00:15:36.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:36.784 23:00:15 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:36.784 23:00:15 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:15:36.784 23:00:15 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:36.784 23:00:15 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:36.784 23:00:15 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:36.784 23:00:15 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:36.784 23:00:15 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:36.784 23:00:15 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:36.784 23:00:15 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:36.784 23:00:15 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:36.784 23:00:15 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:36.784 23:00:15 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:36.784 23:00:15 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:36.784 23:00:15 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:36.784 23:00:15 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:36.784 23:00:15 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:36.784 23:00:15 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:36.784 23:00:15 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:36.784 23:00:15 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:36.784 23:00:15 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:36.784 23:00:15 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:36.784 23:00:15 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:36.784 23:00:15 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:36.784 23:00:15 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:36.784 23:00:15 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:36.784 23:00:15 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:36.784 23:00:15 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:36.784 23:00:15 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:36.784 23:00:15 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:36.784 23:00:15 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:36.784 23:00:15 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:36.784 23:00:15 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:36.784 23:00:15 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:36.784 23:00:15 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:36.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.784 --rc genhtml_branch_coverage=1 00:15:36.784 --rc genhtml_function_coverage=1 00:15:36.784 --rc genhtml_legend=1 00:15:36.784 --rc geninfo_all_blocks=1 00:15:36.784 --rc geninfo_unexecuted_blocks=1 00:15:36.784 00:15:36.784 ' 00:15:36.784 23:00:15 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:36.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.784 --rc genhtml_branch_coverage=1 00:15:36.784 --rc genhtml_function_coverage=1 00:15:36.784 --rc genhtml_legend=1 00:15:36.784 --rc geninfo_all_blocks=1 00:15:36.784 --rc geninfo_unexecuted_blocks=1 00:15:36.784 00:15:36.784 ' 00:15:36.784 23:00:15 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:36.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.784 --rc genhtml_branch_coverage=1 00:15:36.784 --rc genhtml_function_coverage=1 00:15:36.784 --rc genhtml_legend=1 00:15:36.784 --rc geninfo_all_blocks=1 00:15:36.784 --rc geninfo_unexecuted_blocks=1 00:15:36.784 00:15:36.784 ' 00:15:36.784 23:00:15 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:36.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.784 --rc genhtml_branch_coverage=1 00:15:36.784 --rc genhtml_function_coverage=1 00:15:36.784 --rc genhtml_legend=1 00:15:36.784 --rc geninfo_all_blocks=1 00:15:36.784 --rc geninfo_unexecuted_blocks=1 00:15:36.784 00:15:36.784 ' 00:15:36.784 23:00:15 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:36.784 23:00:15 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:36.784 23:00:15 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:36.784 23:00:15 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:36.784 23:00:15 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:36.784 23:00:15 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:36.784 23:00:15 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:36.784 23:00:15 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:36.784 23:00:15 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:36.784 23:00:15 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:15:36.784 23:00:15 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:15:36.784 23:00:15 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:15:36.784 23:00:15 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:15:36.784 23:00:15 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:15:36.784 23:00:15 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:15:36.784 23:00:15 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:15:36.784 23:00:15 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:15:36.784 23:00:15 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:15:36.784 23:00:15 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:15:36.784 23:00:15 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:15:36.784 23:00:15 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:15:36.784 23:00:15 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:15:36.784 23:00:15 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:15:36.784 23:00:15 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:15:36.784 23:00:15 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=85001 00:15:36.784 23:00:15 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:36.784 23:00:15 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:36.784 23:00:15 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 85001 00:15:36.784 23:00:15 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 85001 ']' 00:15:36.784 23:00:15 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.784 23:00:15 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:36.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.784 23:00:15 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.784 23:00:15 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:36.784 23:00:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:36.784 [2024-11-26 23:00:15.406318] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:15:36.784 [2024-11-26 23:00:15.406631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85001 ] 00:15:36.784 [2024-11-26 23:00:15.544377] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
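A note on the startup handshake traced here: start_spdk_tgt forks build/bin/spdk_tgt (pid 85001 in this run) and waitforlisten then polls the RPC socket until the app answers, bailing out early if the pid dies first. A standalone sketch of the same pattern, with $SPDK_DIR standing in for the checkout path from the trace:

# Launch the target in the background and remember its pid.
$SPDK_DIR/build/bin/spdk_tgt &
tgt_pid=$!

# Poll until the RPC server answers on the default socket; rpc_get_methods is a cheap probe.
for _ in $(seq 1 100); do
  if $SPDK_DIR/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
    break
  fi
  # If the target already exited, waiting any longer is pointless.
  kill -0 "$tgt_pid" 2>/dev/null || { echo "spdk_tgt exited during startup" >&2; exit 1; }
  sleep 0.1
done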
00:15:36.784 [2024-11-26 23:00:15.574516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.784 [2024-11-26 23:00:15.603570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.361 23:00:16 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:37.361 23:00:16 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:37.361 23:00:16 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:15:37.361 23:00:16 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:15:37.361 23:00:16 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:37.361 23:00:16 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:37.361 23:00:16 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:37.623 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:38.198 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:38.198 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:38.198 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:38.198 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:38.198 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme1c1n1/queue/zoned ]] 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:38.198 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:38.473 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:38.473 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:38.473 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:38.473 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:38.473 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:38.473 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:38.473 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:38.473 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:38.473 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:38.473 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:38.473 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:38.473 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:38.474 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:38.474 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:38.474 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:38.474 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:38.474 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:38.474 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:38.474 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:38.474 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:38.474 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:15:38.474 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:38.474 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:38.474 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:38.474 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:38.474 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:38.474 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:38.474 23:00:17 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.474 23:00:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:38.474 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:38.474 nvme0n1 00:15:38.474 nvme0n2 00:15:38.474 nvme0n3 00:15:38.474 nvme1n1 00:15:38.474 nvme2n1 00:15:38.474 nvme3n1 00:15:38.474 23:00:17 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.474 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:15:38.474 23:00:17 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.474 23:00:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:38.474 23:00:17 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.474 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:15:38.474 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:15:38.474 23:00:17 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.474 23:00:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:38.474 23:00:17 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.475 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:15:38.475 23:00:17 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.475 23:00:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:38.475 23:00:17 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.475 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:38.475 23:00:17 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.475 23:00:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:38.475 23:00:17 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.475 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:15:38.475 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:15:38.475 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:15:38.475 23:00:17 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.475 23:00:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:38.475 23:00:17 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.475 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:15:38.475 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@786 -- # 
printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "7f56e8c0-be06-4a0d-80d6-0a5a7033d0ad"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7f56e8c0-be06-4a0d-80d6-0a5a7033d0ad",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "775067ff-af22-4775-b869-5bddf011ef05"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "775067ff-af22-4775-b869-5bddf011ef05",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "841e1ffd-2d77-4296-999e-227b26e2339c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "841e1ffd-2d77-4296-999e-227b26e2339c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "87c3c6da-bea0-473d-884b-6f87cac73976"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "87c3c6da-bea0-473d-884b-6f87cac73976",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' 
"seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "391434ad-9b6e-4e6c-8f7e-d3ee7e926961"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "391434ad-9b6e-4e6c-8f7e-d3ee7e926961",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "a13cb3dd-490b-4dbb-9425-383e03b35d48"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a13cb3dd-490b-4dbb-9425-383e03b35d48",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:38.476 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:15:38.476 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:15:38.476 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:15:38.476 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:15:38.476 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 85001 00:15:38.476 23:00:17 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 85001 ']' 00:15:38.476 23:00:17 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 85001 00:15:38.476 23:00:17 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:38.476 23:00:17 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:38.476 23:00:17 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85001 00:15:38.476 killing process with pid 85001 00:15:38.476 23:00:17 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:38.476 23:00:17 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:38.476 23:00:17 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85001' 00:15:38.476 23:00:17 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 85001 00:15:38.476 23:00:17 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 85001 00:15:39.052 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:39.052 23:00:17 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world 
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:39.052 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:39.052 23:00:17 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:39.052 23:00:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:39.052 ************************************ 00:15:39.052 START TEST bdev_hello_world 00:15:39.052 ************************************ 00:15:39.052 23:00:17 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:39.052 [2024-11-26 23:00:17.961359] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:15:39.052 [2024-11-26 23:00:17.961472] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85268 ] 00:15:39.052 [2024-11-26 23:00:18.094199] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:39.052 [2024-11-26 23:00:18.125094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.052 [2024-11-26 23:00:18.153684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.318 [2024-11-26 23:00:18.373465] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:39.318 [2024-11-26 23:00:18.373530] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:39.318 [2024-11-26 23:00:18.373553] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:39.318 [2024-11-26 23:00:18.375865] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:39.318 [2024-11-26 23:00:18.376492] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:39.318 [2024-11-26 23:00:18.376529] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:39.318 [2024-11-26 23:00:18.377650] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
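The bdev.json handed to hello_bdev here was generated earlier in the run from the six bdev_xnvme_create lines printed by setup_xnvme_conf. For a single namespace, an equivalent hand-written config would look roughly like this (a sketch: the /tmp path is assumed, and the trailing -c in the traced RPC is taken to map to conserve_cpu):

cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "filename": "/dev/nvme0n1",
            "name": "nvme0n1",
            "io_mechanism": "io_uring",
            "conserve_cpu": true
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF

# hello_bdev then opens the named bdev, writes "Hello World!", and reads it back.
$SPDK_DIR/build/examples/hello_bdev --json /tmp/bdev.json -b nvme0n1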
00:15:39.318 00:15:39.318 [2024-11-26 23:00:18.377738] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:39.579 00:15:39.579 real 0m0.684s 00:15:39.579 user 0m0.351s 00:15:39.579 sys 0m0.188s 00:15:39.579 ************************************ 00:15:39.579 END TEST bdev_hello_world 00:15:39.579 ************************************ 00:15:39.579 23:00:18 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:39.580 23:00:18 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:39.580 23:00:18 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:39.580 23:00:18 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:39.580 23:00:18 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:39.580 23:00:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:39.580 ************************************ 00:15:39.580 START TEST bdev_bounds 00:15:39.580 ************************************ 00:15:39.580 23:00:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:39.580 Process bdevio pid: 85298 00:15:39.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.580 23:00:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=85298 00:15:39.580 23:00:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:39.580 23:00:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 85298' 00:15:39.580 23:00:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 85298 00:15:39.580 23:00:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 85298 ']' 00:15:39.580 23:00:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.580 23:00:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:39.580 23:00:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:39.580 23:00:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.580 23:00:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:39.580 23:00:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:39.841 [2024-11-26 23:00:18.724591] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:15:39.841 [2024-11-26 23:00:18.724735] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85298 ] 00:15:39.841 [2024-11-26 23:00:18.862408] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
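The bounds test traced just above starts bdevio with -w, so it only loads the bdevs and waits; the CUnit suites below run once an external trigger arrives over RPC. A sketch of the two halves, both commands taken from the trace (-s 0 requests no extra pre-reserved memory):

# Half 1: bdevio registers the six xnvme bdevs and listens on /var/tmp/spdk.sock.
$SPDK_DIR/test/bdev/bdevio/bdevio -w -s 0 --json /tmp/bdev.json &

# Half 2: fire every registered test suite against it.
$SPDK_DIR/test/bdev/bdevio/tests.py perform_tests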
00:15:39.841 [2024-11-26 23:00:18.885260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:39.841 [2024-11-26 23:00:18.916757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.841 [2024-11-26 23:00:18.917346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.841 [2024-11-26 23:00:18.917364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:40.787 23:00:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:40.787 23:00:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:40.787 23:00:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:40.787 I/O targets: 00:15:40.787 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:40.787 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:40.787 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:40.787 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:40.787 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:40.787 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:40.787 00:15:40.787 00:15:40.787 CUnit - A unit testing framework for C - Version 2.1-3 00:15:40.787 http://cunit.sourceforge.net/ 00:15:40.787 00:15:40.787 00:15:40.787 Suite: bdevio tests on: nvme3n1 00:15:40.787 Test: blockdev write read block ...passed 00:15:40.787 Test: blockdev write zeroes read block ...passed 00:15:40.787 Test: blockdev write zeroes read no split ...passed 00:15:40.787 Test: blockdev write zeroes read split ...passed 00:15:40.788 Test: blockdev write zeroes read split partial ...passed 00:15:40.788 Test: blockdev reset ...passed 00:15:40.788 Test: blockdev write read 8 blocks ...passed 00:15:40.788 Test: blockdev write read size > 128k ...passed 00:15:40.788 Test: blockdev write read invalid size ...passed 00:15:40.788 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:40.788 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:40.788 Test: blockdev write read max offset ...passed 00:15:40.788 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:40.788 Test: blockdev writev readv 8 blocks ...passed 00:15:40.788 Test: blockdev writev readv 30 x 1block ...passed 00:15:40.788 Test: blockdev writev readv block ...passed 00:15:40.788 Test: blockdev writev readv size > 128k ...passed 00:15:40.788 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:40.788 Test: blockdev comparev and writev ...passed 00:15:40.788 Test: blockdev nvme passthru rw ...passed 00:15:40.788 Test: blockdev nvme passthru vendor specific ...passed 00:15:40.788 Test: blockdev nvme admin passthru ...passed 00:15:40.788 Test: blockdev copy ...passed 00:15:40.788 Suite: bdevio tests on: nvme2n1 00:15:40.788 Test: blockdev write read block ...passed 00:15:40.788 Test: blockdev write zeroes read block ...passed 00:15:40.788 Test: blockdev write zeroes read no split ...passed 00:15:40.788 Test: blockdev write zeroes read split ...passed 00:15:40.788 Test: blockdev write zeroes read split partial ...passed 00:15:40.788 Test: blockdev reset ...passed 00:15:40.788 Test: blockdev write read 8 blocks ...passed 00:15:40.788 Test: blockdev write read size > 128k ...passed 00:15:40.788 Test: blockdev write read invalid size ...passed 00:15:40.788 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:40.788 Test: blockdev write read offset + nbytes > 
size of blockdev ...passed 00:15:40.788 Test: blockdev write read max offset ...passed 00:15:40.788 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:40.788 Test: blockdev writev readv 8 blocks ...passed 00:15:40.788 Test: blockdev writev readv 30 x 1block ...passed 00:15:40.788 Test: blockdev writev readv block ...passed 00:15:40.788 Test: blockdev writev readv size > 128k ...passed 00:15:40.788 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:40.788 Test: blockdev comparev and writev ...passed 00:15:40.788 Test: blockdev nvme passthru rw ...passed 00:15:40.788 Test: blockdev nvme passthru vendor specific ...passed 00:15:40.788 Test: blockdev nvme admin passthru ...passed 00:15:40.788 Test: blockdev copy ...passed 00:15:40.788 Suite: bdevio tests on: nvme1n1 00:15:40.788 Test: blockdev write read block ...passed 00:15:40.788 Test: blockdev write zeroes read block ...passed 00:15:40.788 Test: blockdev write zeroes read no split ...passed 00:15:40.788 Test: blockdev write zeroes read split ...passed 00:15:40.788 Test: blockdev write zeroes read split partial ...passed 00:15:40.788 Test: blockdev reset ...passed 00:15:40.788 Test: blockdev write read 8 blocks ...passed 00:15:40.788 Test: blockdev write read size > 128k ...passed 00:15:40.788 Test: blockdev write read invalid size ...passed 00:15:40.788 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:40.788 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:40.789 Test: blockdev write read max offset ...passed 00:15:40.789 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:40.789 Test: blockdev writev readv 8 blocks ...passed 00:15:40.789 Test: blockdev writev readv 30 x 1block ...passed 00:15:40.789 Test: blockdev writev readv block ...passed 00:15:40.789 Test: blockdev writev readv size > 128k ...passed 00:15:40.789 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:40.789 Test: blockdev comparev and writev ...passed 00:15:40.789 Test: blockdev nvme passthru rw ...passed 00:15:40.789 Test: blockdev nvme passthru vendor specific ...passed 00:15:40.789 Test: blockdev nvme admin passthru ...passed 00:15:40.789 Test: blockdev copy ...passed 00:15:40.789 Suite: bdevio tests on: nvme0n3 00:15:40.789 Test: blockdev write read block ...passed 00:15:40.789 Test: blockdev write zeroes read block ...passed 00:15:40.789 Test: blockdev write zeroes read no split ...passed 00:15:40.789 Test: blockdev write zeroes read split ...passed 00:15:40.789 Test: blockdev write zeroes read split partial ...passed 00:15:40.789 Test: blockdev reset ...passed 00:15:40.789 Test: blockdev write read 8 blocks ...passed 00:15:40.789 Test: blockdev write read size > 128k ...passed 00:15:40.789 Test: blockdev write read invalid size ...passed 00:15:40.789 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:40.789 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:40.789 Test: blockdev write read max offset ...passed 00:15:40.789 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:40.789 Test: blockdev writev readv 8 blocks ...passed 00:15:40.789 Test: blockdev writev readv 30 x 1block ...passed 00:15:40.789 Test: blockdev writev readv block ...passed 00:15:40.789 Test: blockdev writev readv size > 128k ...passed 00:15:40.789 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:40.789 Test: blockdev comparev and writev ...passed 
00:15:40.789 Test: blockdev nvme passthru rw ...passed 00:15:40.789 Test: blockdev nvme passthru vendor specific ...passed 00:15:40.789 Test: blockdev nvme admin passthru ...passed 00:15:40.789 Test: blockdev copy ...passed 00:15:40.789 Suite: bdevio tests on: nvme0n2 00:15:40.789 Test: blockdev write read block ...passed 00:15:40.789 Test: blockdev write zeroes read block ...passed 00:15:40.789 Test: blockdev write zeroes read no split ...passed 00:15:40.789 Test: blockdev write zeroes read split ...passed 00:15:40.789 Test: blockdev write zeroes read split partial ...passed 00:15:40.789 Test: blockdev reset ...passed 00:15:40.789 Test: blockdev write read 8 blocks ...passed 00:15:40.789 Test: blockdev write read size > 128k ...passed 00:15:40.789 Test: blockdev write read invalid size ...passed 00:15:40.789 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:40.789 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:40.789 Test: blockdev write read max offset ...passed 00:15:41.050 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:41.050 Test: blockdev writev readv 8 blocks ...passed 00:15:41.050 Test: blockdev writev readv 30 x 1block ...passed 00:15:41.050 Test: blockdev writev readv block ...passed 00:15:41.050 Test: blockdev writev readv size > 128k ...passed 00:15:41.050 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:41.050 Test: blockdev comparev and writev ...passed 00:15:41.050 Test: blockdev nvme passthru rw ...passed 00:15:41.050 Test: blockdev nvme passthru vendor specific ...passed 00:15:41.050 Test: blockdev nvme admin passthru ...passed 00:15:41.050 Test: blockdev copy ...passed 00:15:41.050 Suite: bdevio tests on: nvme0n1 00:15:41.050 Test: blockdev write read block ...passed 00:15:41.050 Test: blockdev write zeroes read block ...passed 00:15:41.050 Test: blockdev write zeroes read no split ...passed 00:15:41.050 Test: blockdev write zeroes read split ...passed 00:15:41.050 Test: blockdev write zeroes read split partial ...passed 00:15:41.050 Test: blockdev reset ...passed 00:15:41.050 Test: blockdev write read 8 blocks ...passed 00:15:41.050 Test: blockdev write read size > 128k ...passed 00:15:41.050 Test: blockdev write read invalid size ...passed 00:15:41.050 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:41.050 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:41.050 Test: blockdev write read max offset ...passed 00:15:41.050 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:41.050 Test: blockdev writev readv 8 blocks ...passed 00:15:41.050 Test: blockdev writev readv 30 x 1block ...passed 00:15:41.050 Test: blockdev writev readv block ...passed 00:15:41.050 Test: blockdev writev readv size > 128k ...passed 00:15:41.050 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:41.050 Test: blockdev comparev and writev ...passed 00:15:41.050 Test: blockdev nvme passthru rw ...passed 00:15:41.050 Test: blockdev nvme passthru vendor specific ...passed 00:15:41.050 Test: blockdev nvme admin passthru ...passed 00:15:41.050 Test: blockdev copy ...passed 00:15:41.050 00:15:41.050 Run Summary: Type Total Ran Passed Failed Inactive 00:15:41.050 suites 6 6 n/a 0 0 00:15:41.050 tests 138 138 138 0 0 00:15:41.050 asserts 780 780 780 0 n/a 00:15:41.050 00:15:41.050 Elapsed time = 0.638 seconds 00:15:41.050 0 00:15:41.050 23:00:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # 
killprocess 85298 00:15:41.050 23:00:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 85298 ']' 00:15:41.050 23:00:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 85298 00:15:41.051 23:00:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:41.051 23:00:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:41.051 23:00:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85298 00:15:41.051 23:00:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:41.051 23:00:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:41.051 23:00:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85298' 00:15:41.051 killing process with pid 85298 00:15:41.051 23:00:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 85298 00:15:41.051 23:00:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 85298 00:15:41.312 23:00:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:41.312 00:15:41.312 real 0m1.571s 00:15:41.312 user 0m3.827s 00:15:41.312 sys 0m0.346s 00:15:41.312 23:00:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.312 ************************************ 00:15:41.312 23:00:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:41.312 END TEST bdev_bounds 00:15:41.312 ************************************ 00:15:41.312 23:00:20 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:41.312 23:00:20 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:41.312 23:00:20 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.312 23:00:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:41.312 ************************************ 00:15:41.312 START TEST bdev_nbd 00:15:41.312 ************************************ 00:15:41.312 23:00:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:41.312 23:00:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:41.312 23:00:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:41.312 23:00:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:41.312 23:00:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:41.312 23:00:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:41.312 23:00:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:41.312 23:00:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:15:41.312 23:00:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:41.312 23:00:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' 
'/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:41.312 23:00:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:41.312 23:00:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:41.312 23:00:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:41.312 23:00:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:41.312 23:00:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:41.312 23:00:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:41.312 23:00:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=85350 00:15:41.312 23:00:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:41.312 23:00:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 85350 /var/tmp/spdk-nbd.sock 00:15:41.312 23:00:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 85350 ']' 00:15:41.312 23:00:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:41.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:41.312 23:00:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:41.312 23:00:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:41.312 23:00:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:41.312 23:00:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:41.312 23:00:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:41.312 [2024-11-26 23:00:20.374413] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:15:41.312 [2024-11-26 23:00:20.374562] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:41.574 [2024-11-26 23:00:20.511567] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
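What follows is nbd_rpc_start_stop_verify: bdev_svc exposes each xnvme bdev as a kernel /dev/nbdN node, and a one-block direct read proves the data path works. A condensed sketch of one round trip, using the RPCs visible in the trace below (nbd_stop_disk is the standard counterpart, not shown in this excerpt):

sock=/var/tmp/spdk-nbd.sock

# Export the bdev through the kernel nbd driver.
$SPDK_DIR/scripts/rpc.py -s "$sock" nbd_start_disk nvme0n1 /dev/nbd0

# Once the kernel lists the device, read a single 4 KiB block straight through it.
grep -q -w nbd0 /proc/partitions   # the harness retries this check in a loop
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct

# Tear the export down again.
$SPDK_DIR/scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd0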
00:15:41.574 [2024-11-26 23:00:20.534892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.574 [2024-11-26 23:00:20.564575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.147 23:00:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:42.148 23:00:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:42.148 23:00:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:42.148 23:00:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:42.148 23:00:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:42.148 23:00:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:42.148 23:00:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:42.148 23:00:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:42.148 23:00:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:42.148 23:00:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:42.148 23:00:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:42.148 23:00:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:42.148 23:00:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:42.148 23:00:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:42.148 23:00:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:42.410 23:00:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:42.410 23:00:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:42.410 23:00:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:42.410 23:00:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:42.410 23:00:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:42.410 23:00:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:42.410 23:00:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:42.410 23:00:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:42.410 23:00:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:42.410 23:00:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:42.410 23:00:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:42.410 23:00:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:42.410 1+0 records in 00:15:42.410 1+0 records out 00:15:42.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112041 s, 3.7 MB/s 00:15:42.410 23:00:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.410 23:00:21 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:42.410 23:00:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.410 23:00:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:42.410 23:00:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:42.410 23:00:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:42.410 23:00:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:42.410 23:00:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:42.671 23:00:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:42.671 23:00:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:42.671 23:00:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:42.671 23:00:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:42.671 23:00:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:42.671 23:00:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:42.671 23:00:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:42.671 23:00:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:42.671 23:00:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:42.671 23:00:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:42.671 23:00:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:42.671 23:00:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:42.671 1+0 records in 00:15:42.671 1+0 records out 00:15:42.671 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118078 s, 3.5 MB/s 00:15:42.671 23:00:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.671 23:00:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:42.671 23:00:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.671 23:00:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:42.671 23:00:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:42.671 23:00:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:42.671 23:00:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:42.671 23:00:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:42.933 23:00:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:42.933 23:00:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:42.933 23:00:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:42.933 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:42.933 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:42.933 23:00:22 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:42.933 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:42.933 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:42.933 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:42.933 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:42.933 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:42.933 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:42.933 1+0 records in 00:15:42.933 1+0 records out 00:15:42.933 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011245 s, 3.6 MB/s 00:15:42.933 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.933 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:42.933 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.933 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:42.933 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:42.933 23:00:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:42.933 23:00:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:42.933 23:00:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:43.196 23:00:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:43.196 23:00:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:43.196 23:00:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:43.196 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:43.196 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:43.196 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:43.196 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:43.196 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:43.196 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:43.196 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:43.196 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:43.196 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:43.196 1+0 records in 00:15:43.196 1+0 records out 00:15:43.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000905835 s, 4.5 MB/s 00:15:43.196 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.196 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:43.196 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
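The same start-and-verify cycle now repeats for the remaining bdevs (nvme2n1, nvme3n1). Reduced to a sketch, one iteration of the loop traced above amounts to the following; the rpc.py, socket, and nbdtest paths are taken from the trace, while the poll interval is an assumption since the xtrace does not show the delay between retries:

  # Expose a bdev on the next free NBD node; the RPC prints the node it
  # attached, e.g. /dev/nbd4.
  nbd=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        nbd_start_disk nvme2n1)
  # waitfornbd: poll up to 20 times for the kernel to publish the device.
  for i in $(seq 1 20); do
      grep -q -w "$(basename "$nbd")" /proc/partitions && break
      sleep 0.1   # assumed interval; not visible in the xtrace output
  done
  # Prove the node is readable: one 4 KiB O_DIRECT read must land data.
  dd if="$nbd" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
  test "$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)" != 0
  rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest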
00:15:43.196 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:43.196 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:43.196 23:00:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:43.196 23:00:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:43.196 23:00:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:43.458 23:00:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:43.458 23:00:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:43.458 23:00:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:43.458 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:43.458 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:43.458 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:43.458 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:43.458 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:43.458 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:43.458 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:43.458 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:43.458 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:43.458 1+0 records in 00:15:43.458 1+0 records out 00:15:43.458 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00136298 s, 3.0 MB/s 00:15:43.458 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.458 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:43.458 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.458 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:43.458 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:43.458 23:00:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:43.458 23:00:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:43.458 23:00:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:43.720 23:00:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:43.720 23:00:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:43.720 23:00:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:43.720 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:43.720 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:43.720 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:43.720 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:43.720 23:00:22 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:43.720 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:43.720 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:43.720 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:43.720 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:43.720 1+0 records in 00:15:43.720 1+0 records out 00:15:43.720 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107898 s, 3.8 MB/s 00:15:43.720 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.720 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:43.720 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.720 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:43.720 23:00:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:43.720 23:00:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:43.720 23:00:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:43.720 23:00:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:43.982 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:43.982 { 00:15:43.982 "nbd_device": "/dev/nbd0", 00:15:43.982 "bdev_name": "nvme0n1" 00:15:43.982 }, 00:15:43.982 { 00:15:43.982 "nbd_device": "/dev/nbd1", 00:15:43.982 "bdev_name": "nvme0n2" 00:15:43.982 }, 00:15:43.982 { 00:15:43.982 "nbd_device": "/dev/nbd2", 00:15:43.982 "bdev_name": "nvme0n3" 00:15:43.982 }, 00:15:43.982 { 00:15:43.982 "nbd_device": "/dev/nbd3", 00:15:43.982 "bdev_name": "nvme1n1" 00:15:43.982 }, 00:15:43.982 { 00:15:43.982 "nbd_device": "/dev/nbd4", 00:15:43.982 "bdev_name": "nvme2n1" 00:15:43.982 }, 00:15:43.982 { 00:15:43.982 "nbd_device": "/dev/nbd5", 00:15:43.982 "bdev_name": "nvme3n1" 00:15:43.982 } 00:15:43.982 ]' 00:15:43.982 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:43.982 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:43.982 { 00:15:43.982 "nbd_device": "/dev/nbd0", 00:15:43.982 "bdev_name": "nvme0n1" 00:15:43.982 }, 00:15:43.982 { 00:15:43.982 "nbd_device": "/dev/nbd1", 00:15:43.982 "bdev_name": "nvme0n2" 00:15:43.982 }, 00:15:43.982 { 00:15:43.982 "nbd_device": "/dev/nbd2", 00:15:43.982 "bdev_name": "nvme0n3" 00:15:43.982 }, 00:15:43.982 { 00:15:43.982 "nbd_device": "/dev/nbd3", 00:15:43.982 "bdev_name": "nvme1n1" 00:15:43.982 }, 00:15:43.982 { 00:15:43.982 "nbd_device": "/dev/nbd4", 00:15:43.982 "bdev_name": "nvme2n1" 00:15:43.982 }, 00:15:43.982 { 00:15:43.982 "nbd_device": "/dev/nbd5", 00:15:43.982 "bdev_name": "nvme3n1" 00:15:43.982 } 00:15:43.982 ]' 00:15:43.982 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:43.982 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:43.982 23:00:23 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:43.982 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:43.982 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:43.982 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:43.982 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:43.982 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:44.244 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:44.244 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:44.244 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:44.244 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:44.244 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:44.244 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:44.244 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:44.244 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:44.244 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:44.244 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:44.505 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:44.505 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:44.505 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:44.505 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:44.505 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:44.505 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:44.505 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:44.505 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:44.505 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:44.505 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:44.767 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:44.767 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:44.767 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:44.767 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:44.767 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:44.767 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:15:44.767 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:44.767 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:44.767 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:15:44.767 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:45.035 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:45.035 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:45.035 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:45.035 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:45.035 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:45.035 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:45.035 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:45.035 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:45.035 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:45.035 23:00:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:45.296 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:45.296 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:45.296 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:45.296 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:45.296 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:45.296 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:45.296 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:45.296 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:45.296 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:45.296 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:45.296 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:45.296 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:45.296 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:45.296 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:45.296 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:45.296 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:45.296 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:45.296 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:45.296 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:45.296 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:45.296 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:45.558 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:45.558 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:45.558 23:00:24 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:45.558 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:45.558 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:45.558 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:45.558 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:45.558 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:45.558 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:45.558 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:45.558 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:45.558 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:45.558 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:45.558 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:45.558 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:45.558 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:45.558 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:45.558 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:45.558 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:45.558 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:45.558 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:45.558 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:45.558 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:45.558 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:45.558 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:45.558 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:45.558 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:45.558 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:45.820 /dev/nbd0 00:15:45.820 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:45.820 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:45.820 23:00:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:45.820 23:00:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:45.820 23:00:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:45.820 23:00:24 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:45.820 23:00:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:45.820 23:00:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:45.820 23:00:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:45.820 23:00:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:45.820 23:00:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:45.820 1+0 records in 00:15:45.820 1+0 records out 00:15:45.820 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532639 s, 7.7 MB/s 00:15:45.820 23:00:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.820 23:00:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:45.820 23:00:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.820 23:00:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:45.820 23:00:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:45.820 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:45.820 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:45.820 23:00:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:15:46.081 /dev/nbd1 00:15:46.081 23:00:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:46.081 23:00:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:46.081 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:46.081 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:46.081 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:46.081 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:46.081 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:46.081 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:46.081 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:46.081 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:46.081 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:46.081 1+0 records in 00:15:46.081 1+0 records out 00:15:46.081 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000812844 s, 5.0 MB/s 00:15:46.081 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:46.081 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:46.081 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:46.081 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:46.081 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # 
return 0 00:15:46.081 23:00:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:46.081 23:00:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:46.081 23:00:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:15:46.342 /dev/nbd10 00:15:46.342 23:00:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:46.342 23:00:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:46.342 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:15:46.342 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:46.342 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:46.343 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:46.343 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:15:46.343 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:46.343 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:46.343 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:46.343 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:46.343 1+0 records in 00:15:46.343 1+0 records out 00:15:46.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000581498 s, 7.0 MB/s 00:15:46.343 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:46.343 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:46.343 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:46.343 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:46.343 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:46.343 23:00:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:46.343 23:00:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:46.343 23:00:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:15:46.604 /dev/nbd11 00:15:46.604 23:00:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:46.604 23:00:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:46.604 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:15:46.604 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:46.604 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:46.604 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:46.604 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:15:46.604 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:46.604 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:46.604 23:00:25 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:46.604 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:46.604 1+0 records in 00:15:46.604 1+0 records out 00:15:46.604 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000946274 s, 4.3 MB/s 00:15:46.604 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:46.604 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:46.604 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:46.604 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:46.604 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:46.604 23:00:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:46.604 23:00:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:46.604 23:00:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:46.863 /dev/nbd12 00:15:46.863 23:00:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:46.863 23:00:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:46.863 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:15:46.863 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:46.863 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:46.863 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:46.863 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:15:46.863 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:46.863 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:46.863 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:46.864 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:46.864 1+0 records in 00:15:46.864 1+0 records out 00:15:46.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115648 s, 3.5 MB/s 00:15:46.864 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:46.864 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:46.864 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:46.864 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:46.864 23:00:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:46.864 23:00:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:46.864 23:00:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:46.864 23:00:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 
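In this second pass (nbd_rpc_data_verify) each bdev is pinned to a caller-chosen node rather than letting SPDK pick one, which is why the double-digit nodes /dev/nbd10 through /dev/nbd13 appear. A minimal sketch of that variant, with the bdev and node lists copied from the nbd_common.sh arrays in the trace and the loop structure illustrative:

  bdevs=(nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1)
  nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
  for i in "${!bdevs[@]}"; do
      # With an explicit second argument, nbd_start_disk attaches to that
      # exact node and echoes it back on success.
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
          nbd_start_disk "${bdevs[$i]}" "${nbds[$i]}"
  done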
00:15:47.123 /dev/nbd13 00:15:47.123 23:00:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:47.123 23:00:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:47.123 23:00:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:15:47.123 23:00:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:47.123 23:00:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:47.123 23:00:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:47.123 23:00:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:15:47.123 23:00:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:47.123 23:00:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:47.123 23:00:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:47.123 23:00:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:47.123 1+0 records in 00:15:47.123 1+0 records out 00:15:47.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000772803 s, 5.3 MB/s 00:15:47.123 23:00:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.123 23:00:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:47.123 23:00:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.123 23:00:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:47.123 23:00:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:47.123 23:00:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:47.123 23:00:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:47.123 23:00:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:47.123 23:00:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:47.123 23:00:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:47.385 23:00:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:47.385 { 00:15:47.385 "nbd_device": "/dev/nbd0", 00:15:47.385 "bdev_name": "nvme0n1" 00:15:47.385 }, 00:15:47.385 { 00:15:47.385 "nbd_device": "/dev/nbd1", 00:15:47.385 "bdev_name": "nvme0n2" 00:15:47.385 }, 00:15:47.385 { 00:15:47.385 "nbd_device": "/dev/nbd10", 00:15:47.385 "bdev_name": "nvme0n3" 00:15:47.385 }, 00:15:47.385 { 00:15:47.385 "nbd_device": "/dev/nbd11", 00:15:47.385 "bdev_name": "nvme1n1" 00:15:47.385 }, 00:15:47.385 { 00:15:47.385 "nbd_device": "/dev/nbd12", 00:15:47.385 "bdev_name": "nvme2n1" 00:15:47.385 }, 00:15:47.385 { 00:15:47.385 "nbd_device": "/dev/nbd13", 00:15:47.385 "bdev_name": "nvme3n1" 00:15:47.385 } 00:15:47.385 ]' 00:15:47.385 23:00:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:47.385 { 00:15:47.385 "nbd_device": "/dev/nbd0", 00:15:47.385 "bdev_name": "nvme0n1" 00:15:47.385 }, 00:15:47.385 { 00:15:47.385 "nbd_device": "/dev/nbd1", 00:15:47.385 "bdev_name": "nvme0n2" 00:15:47.385 }, 00:15:47.385 { 00:15:47.385 "nbd_device": 
"/dev/nbd10", 00:15:47.385 "bdev_name": "nvme0n3" 00:15:47.385 }, 00:15:47.385 { 00:15:47.385 "nbd_device": "/dev/nbd11", 00:15:47.385 "bdev_name": "nvme1n1" 00:15:47.385 }, 00:15:47.385 { 00:15:47.385 "nbd_device": "/dev/nbd12", 00:15:47.385 "bdev_name": "nvme2n1" 00:15:47.385 }, 00:15:47.385 { 00:15:47.385 "nbd_device": "/dev/nbd13", 00:15:47.385 "bdev_name": "nvme3n1" 00:15:47.385 } 00:15:47.385 ]' 00:15:47.385 23:00:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:47.385 23:00:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:47.385 /dev/nbd1 00:15:47.385 /dev/nbd10 00:15:47.385 /dev/nbd11 00:15:47.385 /dev/nbd12 00:15:47.385 /dev/nbd13' 00:15:47.385 23:00:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:47.385 23:00:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:47.385 /dev/nbd1 00:15:47.385 /dev/nbd10 00:15:47.385 /dev/nbd11 00:15:47.385 /dev/nbd12 00:15:47.385 /dev/nbd13' 00:15:47.385 23:00:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:47.385 23:00:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:47.385 23:00:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:47.385 23:00:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:47.385 23:00:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:47.385 23:00:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:47.385 23:00:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:47.385 23:00:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:47.385 23:00:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:47.385 23:00:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:47.385 23:00:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:47.385 256+0 records in 00:15:47.385 256+0 records out 00:15:47.385 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00922162 s, 114 MB/s 00:15:47.385 23:00:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:47.385 23:00:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:47.647 256+0 records in 00:15:47.647 256+0 records out 00:15:47.647 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.178554 s, 5.9 MB/s 00:15:47.647 23:00:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:47.647 23:00:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:47.647 256+0 records in 00:15:47.647 256+0 records out 00:15:47.647 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.198414 s, 5.3 MB/s 00:15:47.647 23:00:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:47.647 23:00:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 
00:15:47.909 256+0 records in 00:15:47.909 256+0 records out 00:15:47.909 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.231653 s, 4.5 MB/s 00:15:47.909 23:00:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:47.909 23:00:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:48.171 256+0 records in 00:15:48.171 256+0 records out 00:15:48.171 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.235211 s, 4.5 MB/s 00:15:48.171 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:48.171 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:48.432 256+0 records in 00:15:48.432 256+0 records out 00:15:48.432 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.247713 s, 4.2 MB/s 00:15:48.432 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:48.432 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:48.692 256+0 records in 00:15:48.692 256+0 records out 00:15:48.692 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.173643 s, 6.0 MB/s 00:15:48.692 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:48.692 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:48.693 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:48.693 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:48.693 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:48.693 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:48.693 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:48.693 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:48.693 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:48.693 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:48.693 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:48.693 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:48.693 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:48.693 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:48.693 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:48.693 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:48.693 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:48.693 23:00:27 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:48.693 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:48.693 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:48.693 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:48.693 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:48.693 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:48.693 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:48.693 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:48.693 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:48.693 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:48.954 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:48.954 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:48.954 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:48.954 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:48.954 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:48.954 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:48.954 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:48.954 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:48.954 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:48.954 23:00:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:49.216 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:49.216 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:49.216 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:49.216 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:49.216 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:49.216 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:49.216 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:49.216 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:49.216 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:49.216 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:15:49.216 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:49.216 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:49.216 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd10 00:15:49.216 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:49.216 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:49.216 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:49.477 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:49.477 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:49.477 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:49.477 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:49.477 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:49.477 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:49.477 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:49.477 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:49.477 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:49.477 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:49.477 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:49.477 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:49.477 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:49.477 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:49.737 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:49.737 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:49.737 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:49.737 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:49.737 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:49.737 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:49.737 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:49.737 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:49.737 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:49.737 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:49.998 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:49.998 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:49.998 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:49.998 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:49.998 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:49.998 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:49.998 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:49.998 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:49.998 23:00:28 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:49.998 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:49.998 23:00:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:50.260 23:00:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:50.260 23:00:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:50.260 23:00:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:50.260 23:00:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:50.260 23:00:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:50.260 23:00:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:50.260 23:00:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:50.260 23:00:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:50.260 23:00:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:50.260 23:00:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:50.260 23:00:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:50.260 23:00:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:50.260 23:00:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:50.260 23:00:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:50.260 23:00:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:50.260 23:00:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:50.521 malloc_lvol_verify 00:15:50.521 23:00:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:50.521 f2e5ee11-1eb9-439f-a7be-8d775bc34013 00:15:50.521 23:00:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:50.781 f5d1fd70-370b-4c1b-93a3-785e8e9b5f6d 00:15:50.781 23:00:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:51.041 /dev/nbd0 00:15:51.042 23:00:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:51.042 23:00:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:51.042 23:00:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:51.042 23:00:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:51.042 23:00:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:15:51.042 mke2fs 1.47.0 (5-Feb-2023) 00:15:51.042 Discarding device blocks: 0/4096 done 00:15:51.042 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:51.042 00:15:51.042 Allocating group tables: 0/1 done 00:15:51.042 Writing inode tables: 0/1 done 00:15:51.042 Creating journal (1024 blocks): done 00:15:51.042 Writing superblocks and filesystem accounting 
information: 0/1 done 00:15:51.042 00:15:51.042 23:00:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:51.042 23:00:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:51.042 23:00:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:51.042 23:00:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:51.042 23:00:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:51.042 23:00:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:51.042 23:00:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:51.303 23:00:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:51.303 23:00:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:51.303 23:00:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:51.303 23:00:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.303 23:00:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.303 23:00:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:51.303 23:00:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:51.303 23:00:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:51.303 23:00:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 85350 00:15:51.303 23:00:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 85350 ']' 00:15:51.303 23:00:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 85350 00:15:51.303 23:00:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:51.303 23:00:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:51.303 23:00:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85350 00:15:51.303 23:00:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:51.303 23:00:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:51.303 killing process with pid 85350 00:15:51.304 23:00:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85350' 00:15:51.304 23:00:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 85350 00:15:51.304 23:00:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 85350 00:15:51.304 23:00:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:51.304 00:15:51.304 real 0m10.110s 00:15:51.304 user 0m13.963s 00:15:51.304 sys 0m3.581s 00:15:51.304 23:00:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:51.304 ************************************ 00:15:51.304 END TEST bdev_nbd 00:15:51.304 ************************************ 00:15:51.304 23:00:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:51.566 23:00:30 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:15:51.566 23:00:30 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:15:51.566 23:00:30 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:15:51.566 23:00:30 blockdev_xnvme -- 
bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:15:51.566 23:00:30 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:51.566 23:00:30 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:51.566 23:00:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:51.566 ************************************ 00:15:51.566 START TEST bdev_fio 00:15:51.566 ************************************ 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:15:51.566 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:51.566 ************************************ 00:15:51.566 START TEST bdev_fio_rw_verify 00:15:51.566 ************************************ 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:51.566 23:00:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:51.567 23:00:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:15:51.567 23:00:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:51.567 23:00:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:51.567 23:00:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:15:51.567 23:00:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:51.567 23:00:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:51.567 23:00:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:51.567 23:00:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:51.567 23:00:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:15:51.567 23:00:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:51.567 23:00:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:51.827 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:51.827 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:51.827 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:51.827 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:51.827 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:51.827 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:51.827 fio-3.35 00:15:51.827 Starting 6 threads 00:16:04.085 00:16:04.085 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=85745: Tue Nov 26 23:00:41 2024 00:16:04.085 read: IOPS=15.7k, BW=61.5MiB/s (64.5MB/s)(615MiB/10004msec) 00:16:04.085 slat (usec): min=2, max=3596, avg= 6.55, stdev=16.78 00:16:04.085 clat (usec): min=79, max=6910, avg=1179.88, stdev=690.83 00:16:04.085 lat (usec): min=83, max=6923, avg=1186.43, stdev=691.81 00:16:04.085 clat percentiles (usec): 00:16:04.085 | 50.000th=[ 1090], 99.000th=[ 3326], 99.900th=[ 4490], 99.990th=[ 5473], 00:16:04.085 | 99.999th=[ 6521] 00:16:04.085 write: IOPS=15.9k, BW=62.3MiB/s (65.3MB/s)(623MiB/10004msec); 0 zone resets 00:16:04.085 slat (usec): min=10, max=3134, avg=37.68, 
stdev=120.48 00:16:04.085 clat (usec): min=83, max=12294, avg=1550.28, stdev=938.43 00:16:04.085 lat (usec): min=97, max=12318, avg=1587.97, stdev=946.91 00:16:04.085 clat percentiles (usec): 00:16:04.085 | 50.000th=[ 1369], 99.000th=[ 5080], 99.900th=[ 7046], 99.990th=[ 9110], 00:16:04.085 | 99.999th=[12256] 00:16:04.085 bw ( KiB/s): min=42898, max=87093, per=99.73%, avg=63601.68, stdev=2142.72, samples=114 00:16:04.085 iops : min=10722, max=21773, avg=15899.26, stdev=535.72, samples=114 00:16:04.085 lat (usec) : 100=0.01%, 250=2.57%, 500=8.49%, 750=11.66%, 1000=14.15% 00:16:04.085 lat (msec) : 2=46.06%, 4=15.66%, 10=1.40%, 20=0.01% 00:16:04.085 cpu : usr=44.52%, sys=32.56%, ctx=5528, majf=0, minf=15562 00:16:04.085 IO depths : 1=10.8%, 2=23.0%, 4=51.4%, 8=14.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:04.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.085 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.085 issued rwts: total=157549,159502,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.085 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:04.085 00:16:04.085 Run status group 0 (all jobs): 00:16:04.085 READ: bw=61.5MiB/s (64.5MB/s), 61.5MiB/s-61.5MiB/s (64.5MB/s-64.5MB/s), io=615MiB (645MB), run=10004-10004msec 00:16:04.085 WRITE: bw=62.3MiB/s (65.3MB/s), 62.3MiB/s-62.3MiB/s (65.3MB/s-65.3MB/s), io=623MiB (653MB), run=10004-10004msec 00:16:04.085 ----------------------------------------------------- 00:16:04.085 Suppressions used: 00:16:04.085 count bytes template 00:16:04.085 6 48 /usr/src/fio/parse.c 00:16:04.085 1852 177792 /usr/src/fio/iolog.c 00:16:04.085 1 8 libtcmalloc_minimal.so 00:16:04.085 1 904 libcrypto.so 00:16:04.085 ----------------------------------------------------- 00:16:04.086 00:16:04.086 ************************************ 00:16:04.086 END TEST bdev_fio_rw_verify 00:16:04.086 ************************************ 00:16:04.086 00:16:04.086 real 0m11.198s 00:16:04.086 user 0m27.407s 00:16:04.086 sys 0m19.893s 00:16:04.086 23:00:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:04.086 23:00:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:04.086 23:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:16:04.086 23:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:04.086 23:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:04.086 23:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:04.086 23:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:16:04.086 23:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:16:04.086 23:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:04.086 23:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:04.086 23:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:04.086 23:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:16:04.086 23:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 
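The rw_verify stage above drives stock fio through SPDK's bdev ioengine plugin: fio_config_gen emits one job section per xnvme bdev into bdev.fio, ldd locates the ASan runtime the instrumented plugin needs, and both are LD_PRELOADed before fio starts. A minimal sketch of that invocation, assuming SPDK_DIR points at the same checkout and fio is built at /usr/src/fio as in this workspace:

    # Preload the sanitizer runtime found via `ldd ... | grep libasan`,
    # then the bdev ioengine plugin itself, and run the generated job file.
    LD_PRELOAD="/usr/lib64/libasan.so.8 $SPDK_DIR/build/fio/spdk_bdev" \
      /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        "$SPDK_DIR/test/bdev/bdev.fio" --verify_state_save=0 \
        --spdk_json_conf="$SPDK_DIR/test/bdev/bdev.json" --spdk_mem=0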
00:16:04.086 23:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:04.086 23:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:04.086 23:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:16:04.086 23:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:16:04.086 23:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:16:04.086 23:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:04.087 23:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "7f56e8c0-be06-4a0d-80d6-0a5a7033d0ad"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7f56e8c0-be06-4a0d-80d6-0a5a7033d0ad",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "775067ff-af22-4775-b869-5bddf011ef05"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "775067ff-af22-4775-b869-5bddf011ef05",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "841e1ffd-2d77-4296-999e-227b26e2339c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "841e1ffd-2d77-4296-999e-227b26e2339c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' 
"87c3c6da-bea0-473d-884b-6f87cac73976"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "87c3c6da-bea0-473d-884b-6f87cac73976",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "391434ad-9b6e-4e6c-8f7e-d3ee7e926961"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "391434ad-9b6e-4e6c-8f7e-d3ee7e926961",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "a13cb3dd-490b-4dbb-9425-383e03b35d48"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a13cb3dd-490b-4dbb-9425-383e03b35d48",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:04.087 23:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:04.087 23:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:04.087 /home/vagrant/spdk_repo/spdk 00:16:04.087 23:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:04.087 23:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:04.087 23:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:16:04.087 00:16:04.087 real 0m11.387s 00:16:04.087 user 0m27.488s 00:16:04.087 sys 0m19.974s 00:16:04.087 23:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:04.087 ************************************ 00:16:04.087 END TEST bdev_fio 00:16:04.087 ************************************ 00:16:04.087 23:00:41 
blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:04.087 23:00:41 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:04.087 23:00:41 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:04.087 23:00:41 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:04.087 23:00:41 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.087 23:00:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:04.087 ************************************ 00:16:04.087 START TEST bdev_verify 00:16:04.087 ************************************ 00:16:04.087 23:00:41 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:04.087 [2024-11-26 23:00:42.002057] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:16:04.087 [2024-11-26 23:00:42.002189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85918 ] 00:16:04.087 [2024-11-26 23:00:42.139118] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:04.087 [2024-11-26 23:00:42.169584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:04.087 [2024-11-26 23:00:42.199678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:04.087 [2024-11-26 23:00:42.199733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.087 Running I/O for 5 seconds... 
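The verify pass launched here uses the bdevperf example app rather than fio, against the same bdev.json. Reduced to its flags (paths as in this workspace; SPDK_DIR is shorthand for the checkout):

    # 128-deep 4 KiB verify workload for 5 s on cores 0-1 (-m 0x3);
    # -C lets every core in the mask submit I/O to each bdev, which is
    # why the results below list each nvme*n* job twice (Core Mask 0x1 and 0x2).
    "$SPDK_DIR/build/examples/bdevperf" \
      --json "$SPDK_DIR/test/bdev/bdev.json" \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3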
00:16:05.611 24512.00 IOPS, 95.75 MiB/s [2024-11-26T23:00:45.681Z] 23792.00 IOPS, 92.94 MiB/s [2024-11-26T23:00:47.065Z] 23456.00 IOPS, 91.62 MiB/s [2024-11-26T23:00:47.654Z] 23768.00 IOPS, 92.84 MiB/s [2024-11-26T23:00:47.654Z] 23586.40 IOPS, 92.13 MiB/s 00:16:08.528 Latency(us) 00:16:08.528 [2024-11-26T23:00:47.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:08.528 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:08.528 Verification LBA range: start 0x0 length 0x80000 00:16:08.528 nvme0n1 : 5.06 1922.02 7.51 0.00 0.00 66466.16 4537.11 64931.05 00:16:08.528 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:08.528 Verification LBA range: start 0x80000 length 0x80000 00:16:08.528 nvme0n1 : 5.04 1879.26 7.34 0.00 0.00 67971.30 7561.85 68964.04 00:16:08.528 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:08.528 Verification LBA range: start 0x0 length 0x80000 00:16:08.528 nvme0n2 : 5.11 1904.68 7.44 0.00 0.00 66872.07 5016.02 69367.34 00:16:08.528 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:08.528 Verification LBA range: start 0x80000 length 0x80000 00:16:08.528 nvme0n2 : 5.06 1870.20 7.31 0.00 0.00 68159.14 5167.26 73803.62 00:16:08.528 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:08.528 Verification LBA range: start 0x0 length 0x80000 00:16:08.528 nvme0n3 : 5.10 1906.25 7.45 0.00 0.00 66744.09 6024.27 79046.50 00:16:08.528 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:08.528 Verification LBA range: start 0x80000 length 0x80000 00:16:08.528 nvme0n3 : 5.04 1853.02 7.24 0.00 0.00 68648.91 6200.71 75013.51 00:16:08.528 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:08.528 Verification LBA range: start 0x0 length 0x20000 00:16:08.528 nvme1n1 : 5.11 1903.88 7.44 0.00 0.00 66693.49 4209.43 123409.33 00:16:08.528 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:08.528 Verification LBA range: start 0x20000 length 0x20000 00:16:08.528 nvme1n1 : 5.10 1830.89 7.15 0.00 0.00 69349.22 11090.71 81466.29 00:16:08.528 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:08.528 Verification LBA range: start 0x0 length 0xbd0bd 00:16:08.528 nvme2n1 : 5.11 2318.71 9.06 0.00 0.00 54643.28 4965.61 60091.47 00:16:08.528 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:08.528 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:16:08.528 nvme2n1 : 5.13 2312.40 9.03 0.00 0.00 54605.98 226.86 81062.99 00:16:08.528 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:08.528 Verification LBA range: start 0x0 length 0xa0000 00:16:08.528 nvme3n1 : 5.11 1903.34 7.43 0.00 0.00 66494.66 3982.57 104051.00 00:16:08.528 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:08.528 Verification LBA range: start 0xa0000 length 0xa0000 00:16:08.528 nvme3n1 : 5.10 1781.30 6.96 0.00 0.00 70877.99 9175.04 119376.34 00:16:08.528 [2024-11-26T23:00:47.655Z] =================================================================================================================== 00:16:08.528 [2024-11-26T23:00:47.655Z] Total : 23385.96 91.35 0.00 0.00 65173.55 226.86 123409.33 00:16:08.789 ************************************ 00:16:08.789 END TEST bdev_verify 00:16:08.789 ************************************ 00:16:08.789 00:16:08.789 real 
0m5.802s 00:16:08.789 user 0m9.465s 00:16:08.789 sys 0m1.326s 00:16:08.789 23:00:47 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:08.789 23:00:47 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:08.789 23:00:47 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:08.789 23:00:47 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:08.789 23:00:47 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:08.789 23:00:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:08.789 ************************************ 00:16:08.789 START TEST bdev_verify_big_io 00:16:08.789 ************************************ 00:16:08.789 23:00:47 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:08.789 [2024-11-26 23:00:47.856652] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:16:08.789 [2024-11-26 23:00:47.856766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86009 ] 00:16:09.050 [2024-11-26 23:00:47.990521] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:09.050 [2024-11-26 23:00:48.013654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:09.050 [2024-11-26 23:00:48.034913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.050 [2024-11-26 23:00:48.034947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.312 Running I/O for 5 seconds... 
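The big-I/O pass below repeats the same harness with -o 65536, so each submission is 64 KiB rather than 4 KiB; in the table that follows, IOPS drop and per-IO bandwidth rises accordingly.

    # Identical to the verify pass above except for the I/O size.
    "$SPDK_DIR/build/examples/bdevperf" --json "$SPDK_DIR/test/bdev/bdev.json" \
      -q 128 -o 65536 -w verify -t 5 -C -m 0x3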
00:16:15.227 2144.00 IOPS, 134.00 MiB/s [2024-11-26T23:00:54.354Z] 3406.00 IOPS, 212.88 MiB/s 00:16:15.227 Latency(us) 00:16:15.227 [2024-11-26T23:00:54.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:15.227 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:15.227 Verification LBA range: start 0x0 length 0x8000 00:16:15.227 nvme0n1 : 5.68 132.42 8.28 0.00 0.00 953543.17 9628.75 1206669.00 00:16:15.227 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:15.227 Verification LBA range: start 0x8000 length 0x8000 00:16:15.227 nvme0n1 : 5.86 141.93 8.87 0.00 0.00 855172.09 45371.08 1148594.02 00:16:15.227 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:15.227 Verification LBA range: start 0x0 length 0x8000 00:16:15.227 nvme0n2 : 5.68 98.58 6.16 0.00 0.00 1222252.52 79853.10 2645637.91 00:16:15.227 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:15.227 Verification LBA range: start 0x8000 length 0x8000 00:16:15.227 nvme0n2 : 5.86 106.42 6.65 0.00 0.00 1091491.09 136314.88 1432516.14 00:16:15.227 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:15.227 Verification LBA range: start 0x0 length 0x8000 00:16:15.227 nvme0n3 : 5.74 108.70 6.79 0.00 0.00 1092364.74 47185.92 1897115.96 00:16:15.227 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:15.227 Verification LBA range: start 0x8000 length 0x8000 00:16:15.227 nvme0n3 : 5.89 119.42 7.46 0.00 0.00 969033.40 158093.00 1264743.98 00:16:15.227 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:15.227 Verification LBA range: start 0x0 length 0x2000 00:16:15.227 nvme1n1 : 5.75 106.82 6.68 0.00 0.00 1076410.82 64124.46 2981182.23 00:16:15.227 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:15.227 Verification LBA range: start 0x2000 length 0x2000 00:16:15.227 nvme1n1 : 5.90 138.28 8.64 0.00 0.00 815945.66 4411.08 1058255.16 00:16:15.227 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:15.227 Verification LBA range: start 0x0 length 0xbd0b 00:16:15.228 nvme2n1 : 5.75 144.81 9.05 0.00 0.00 767779.66 9326.28 825955.25 00:16:15.228 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:15.228 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:15.228 nvme2n1 : 5.93 156.49 9.78 0.00 0.00 697425.05 11746.07 1348630.06 00:16:15.228 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:15.228 Verification LBA range: start 0x0 length 0xa000 00:16:15.228 nvme3n1 : 5.87 166.19 10.39 0.00 0.00 652801.81 392.27 1226027.32 00:16:15.228 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:15.228 Verification LBA range: start 0xa000 length 0xa000 00:16:15.228 nvme3n1 : 5.94 156.29 9.77 0.00 0.00 682607.08 576.59 790464.98 00:16:15.228 [2024-11-26T23:00:54.355Z] =================================================================================================================== 00:16:15.228 [2024-11-26T23:00:54.355Z] Total : 1576.35 98.52 0.00 0.00 875003.22 392.27 2981182.23 00:16:15.228 00:16:15.228 real 0m6.544s 00:16:15.228 user 0m12.209s 00:16:15.228 sys 0m0.336s 00:16:15.228 ************************************ 00:16:15.228 END TEST bdev_verify_big_io 00:16:15.228 ************************************ 00:16:15.228 23:00:54 blockdev_xnvme.bdev_verify_big_io -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:16:15.228 23:00:54 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.489 23:00:54 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:15.489 23:00:54 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:15.489 23:00:54 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:15.489 23:00:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:15.489 ************************************ 00:16:15.489 START TEST bdev_write_zeroes 00:16:15.489 ************************************ 00:16:15.489 23:00:54 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:15.489 [2024-11-26 23:00:54.460951] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:16:15.489 [2024-11-26 23:00:54.461063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86102 ] 00:16:15.489 [2024-11-26 23:00:54.593994] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:15.750 [2024-11-26 23:00:54.621335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.750 [2024-11-26 23:00:54.640067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.750 Running I/O for 1 seconds... 
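For write_zeroes the suite drops to one core and one second, as the EAL parameters above show (-c 0x1, a single reactor on core 0). A sketch of that run:

    # Zero-fill workload: 128-deep, 4 KiB write_zeroes commands for 1 s
    # on the default single core (no -C/-m this time).
    "$SPDK_DIR/build/examples/bdevperf" --json "$SPDK_DIR/test/bdev/bdev.json" \
      -q 128 -o 4096 -w write_zeroes -t 1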
00:16:17.135 81436.00 IOPS, 318.11 MiB/s 00:16:17.135 Latency(us) 00:16:17.135 [2024-11-26T23:00:56.262Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:17.135 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:17.135 nvme0n1 : 1.01 13391.91 52.31 0.00 0.00 9549.35 7007.31 26214.40 00:16:17.135 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:17.135 nvme0n2 : 1.01 13376.32 52.25 0.00 0.00 9555.14 7057.72 24903.68 00:16:17.135 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:17.135 nvme0n3 : 1.02 13360.64 52.19 0.00 0.00 9559.24 7057.72 23794.61 00:16:17.135 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:17.135 nvme1n1 : 1.02 13345.67 52.13 0.00 0.00 9564.61 7057.72 22584.71 00:16:17.135 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:17.135 nvme2n1 : 1.02 14182.75 55.40 0.00 0.00 8994.44 3957.37 18249.26 00:16:17.135 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:17.135 nvme3n1 : 1.02 13329.12 52.07 0.00 0.00 9513.44 6604.01 23492.14 00:16:17.135 [2024-11-26T23:00:56.262Z] =================================================================================================================== 00:16:17.135 [2024-11-26T23:00:56.262Z] Total : 80986.41 316.35 0.00 0.00 9450.85 3957.37 26214.40 00:16:17.135 00:16:17.135 real 0m1.635s 00:16:17.135 user 0m1.041s 00:16:17.135 sys 0m0.421s 00:16:17.135 ************************************ 00:16:17.135 END TEST bdev_write_zeroes 00:16:17.135 ************************************ 00:16:17.135 23:00:56 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:17.135 23:00:56 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:17.135 23:00:56 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:17.135 23:00:56 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:17.135 23:00:56 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:17.135 23:00:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:17.135 ************************************ 00:16:17.135 START TEST bdev_json_nonenclosed 00:16:17.135 ************************************ 00:16:17.135 23:00:56 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:17.135 [2024-11-26 23:00:56.162235] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:16:17.135 [2024-11-26 23:00:56.162374] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86139 ] 00:16:17.396 [2024-11-26 23:00:56.295171] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:17.396 [2024-11-26 23:00:56.325835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.396 [2024-11-26 23:00:56.343946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.396 [2024-11-26 23:00:56.344025] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:17.396 [2024-11-26 23:00:56.344041] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:17.396 [2024-11-26 23:00:56.344056] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:17.396 00:16:17.396 real 0m0.314s 00:16:17.396 user 0m0.113s 00:16:17.396 sys 0m0.098s 00:16:17.396 ************************************ 00:16:17.396 END TEST bdev_json_nonenclosed 00:16:17.396 ************************************ 00:16:17.396 23:00:56 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:17.396 23:00:56 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:17.396 23:00:56 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:17.396 23:00:56 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:17.396 23:00:56 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:17.396 23:00:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:17.396 ************************************ 00:16:17.396 START TEST bdev_json_nonarray 00:16:17.396 ************************************ 00:16:17.396 23:00:56 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:17.657 [2024-11-26 23:00:56.549416] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:16:17.657 [2024-11-26 23:00:56.549792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86164 ] 00:16:17.657 [2024-11-26 23:00:56.689151] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:17.657 [2024-11-26 23:00:56.721378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.657 [2024-11-26 23:00:56.748581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.657 [2024-11-26 23:00:56.748702] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:16:17.657 [2024-11-26 23:00:56.748722] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:17.657 [2024-11-26 23:00:56.748733] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:17.918 00:16:17.918 real 0m0.356s 00:16:17.918 user 0m0.128s 00:16:17.918 sys 0m0.121s 00:16:17.918 23:00:56 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:17.918 ************************************ 00:16:17.918 END TEST bdev_json_nonarray 00:16:17.918 ************************************ 00:16:17.918 23:00:56 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:17.918 23:00:56 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:16:17.918 23:00:56 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:16:17.918 23:00:56 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:16:17.918 23:00:56 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:16:17.918 23:00:56 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:16:17.918 23:00:56 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:17.918 23:00:56 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:17.918 23:00:56 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:17.918 23:00:56 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:17.918 23:00:56 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:17.918 23:00:56 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:17.918 23:00:56 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:18.491 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:19.062 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:19.062 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:20.007 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:20.007 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:20.007 00:16:20.007 real 0m43.717s 00:16:20.007 user 1m12.266s 00:16:20.007 sys 0m30.996s 00:16:20.007 23:00:58 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.007 ************************************ 00:16:20.007 END TEST blockdev_xnvme 00:16:20.007 ************************************ 00:16:20.007 23:00:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:20.007 23:00:58 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:20.007 23:00:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:20.007 23:00:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.007 23:00:58 -- common/autotest_common.sh@10 -- # set +x 00:16:20.007 ************************************ 00:16:20.007 START TEST ublk 00:16:20.007 ************************************ 00:16:20.007 23:00:58 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:20.007 * Looking for test storage... 
00:16:20.007 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:20.007 23:00:59 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:20.007 23:00:59 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:16:20.007 23:00:59 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:20.007 23:00:59 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:20.007 23:00:59 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:20.007 23:00:59 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:20.008 23:00:59 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:20.008 23:00:59 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:20.008 23:00:59 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:20.008 23:00:59 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:20.008 23:00:59 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:20.008 23:00:59 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:20.008 23:00:59 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:20.008 23:00:59 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:20.008 23:00:59 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:20.008 23:00:59 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:20.008 23:00:59 ublk -- scripts/common.sh@345 -- # : 1 00:16:20.008 23:00:59 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:20.008 23:00:59 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:20.008 23:00:59 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:20.008 23:00:59 ublk -- scripts/common.sh@353 -- # local d=1 00:16:20.008 23:00:59 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:20.008 23:00:59 ublk -- scripts/common.sh@355 -- # echo 1 00:16:20.008 23:00:59 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:20.008 23:00:59 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:20.008 23:00:59 ublk -- scripts/common.sh@353 -- # local d=2 00:16:20.008 23:00:59 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:20.008 23:00:59 ublk -- scripts/common.sh@355 -- # echo 2 00:16:20.008 23:00:59 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:20.008 23:00:59 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:20.008 23:00:59 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:20.008 23:00:59 ublk -- scripts/common.sh@368 -- # return 0 00:16:20.008 23:00:59 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:20.008 23:00:59 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:20.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.008 --rc genhtml_branch_coverage=1 00:16:20.008 --rc genhtml_function_coverage=1 00:16:20.008 --rc genhtml_legend=1 00:16:20.008 --rc geninfo_all_blocks=1 00:16:20.008 --rc geninfo_unexecuted_blocks=1 00:16:20.008 00:16:20.008 ' 00:16:20.008 23:00:59 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:20.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.008 --rc genhtml_branch_coverage=1 00:16:20.008 --rc genhtml_function_coverage=1 00:16:20.008 --rc genhtml_legend=1 00:16:20.008 --rc geninfo_all_blocks=1 00:16:20.008 --rc geninfo_unexecuted_blocks=1 00:16:20.008 00:16:20.008 ' 00:16:20.008 23:00:59 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:20.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.008 --rc genhtml_branch_coverage=1 00:16:20.008 --rc 
genhtml_function_coverage=1 00:16:20.008 --rc genhtml_legend=1 00:16:20.008 --rc geninfo_all_blocks=1 00:16:20.008 --rc geninfo_unexecuted_blocks=1 00:16:20.008 00:16:20.008 ' 00:16:20.008 23:00:59 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:20.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.008 --rc genhtml_branch_coverage=1 00:16:20.008 --rc genhtml_function_coverage=1 00:16:20.008 --rc genhtml_legend=1 00:16:20.008 --rc geninfo_all_blocks=1 00:16:20.008 --rc geninfo_unexecuted_blocks=1 00:16:20.008 00:16:20.008 ' 00:16:20.008 23:00:59 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:20.008 23:00:59 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:20.008 23:00:59 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:20.008 23:00:59 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:20.008 23:00:59 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:20.008 23:00:59 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:20.008 23:00:59 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:20.008 23:00:59 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:20.008 23:00:59 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:20.008 23:00:59 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:20.008 23:00:59 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:20.008 23:00:59 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:20.008 23:00:59 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:20.008 23:00:59 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:20.008 23:00:59 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:20.008 23:00:59 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:20.008 23:00:59 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:20.008 23:00:59 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:20.008 23:00:59 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:20.008 23:00:59 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:20.008 23:00:59 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:20.008 23:00:59 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.008 23:00:59 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:20.008 ************************************ 00:16:20.008 START TEST test_save_ublk_config 00:16:20.008 ************************************ 00:16:20.008 23:00:59 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:20.008 23:00:59 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:20.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
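test_save_ublk_config boots a fresh spdk_tgt with ublk debug logging (-L ublk), creates a ublk target plus one disk backed by a malloc bdev, and snapshots the result with save_config; the suite then restarts the target (pid 86489 below) to check that the saved configuration round-trips. The saved JSON printed further down records the setup as replayable method calls; the two ublk-specific entries, verbatim from that dump, are:

    { "method": "ublk_create_target", "params": { "cpumask": "1" } }
    { "method": "ublk_start_disk",
      "params": { "bdev_name": "malloc0", "ublk_id": 0,
                  "num_queues": 1, "queue_depth": 128 } }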
00:16:20.008 23:00:59 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=86449 00:16:20.008 23:00:59 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:20.008 23:00:59 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:20.008 23:00:59 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 86449 00:16:20.008 23:00:59 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 86449 ']' 00:16:20.008 23:00:59 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.008 23:00:59 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:20.008 23:00:59 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.008 23:00:59 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:20.008 23:00:59 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:20.268 [2024-11-26 23:00:59.218444] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:16:20.268 [2024-11-26 23:00:59.218830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86449 ] 00:16:20.268 [2024-11-26 23:00:59.356766] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:20.268 [2024-11-26 23:00:59.383365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.528 [2024-11-26 23:00:59.412431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.101 23:01:00 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:21.101 23:01:00 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:21.101 23:01:00 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:21.101 23:01:00 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:21.101 23:01:00 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.101 23:01:00 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:21.101 [2024-11-26 23:01:00.076322] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:21.101 [2024-11-26 23:01:00.077475] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:21.101 malloc0 00:16:21.101 [2024-11-26 23:01:00.108464] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:21.101 [2024-11-26 23:01:00.108562] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:21.101 [2024-11-26 23:01:00.108579] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:21.101 [2024-11-26 23:01:00.108587] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:21.101 [2024-11-26 23:01:00.117431] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:21.101 [2024-11-26 23:01:00.117457] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:21.101 [2024-11-26 23:01:00.122326] 
ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:21.101 [2024-11-26 23:01:00.122455] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:21.101 [2024-11-26 23:01:00.141327] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:21.101 0 00:16:21.101 23:01:00 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.101 23:01:00 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:21.101 23:01:00 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.101 23:01:00 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:21.363 23:01:00 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.363 23:01:00 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:21.363 "subsystems": [ 00:16:21.363 { 00:16:21.363 "subsystem": "fsdev", 00:16:21.363 "config": [ 00:16:21.363 { 00:16:21.363 "method": "fsdev_set_opts", 00:16:21.363 "params": { 00:16:21.363 "fsdev_io_pool_size": 65535, 00:16:21.363 "fsdev_io_cache_size": 256 00:16:21.363 } 00:16:21.363 } 00:16:21.363 ] 00:16:21.363 }, 00:16:21.363 { 00:16:21.363 "subsystem": "keyring", 00:16:21.363 "config": [] 00:16:21.363 }, 00:16:21.363 { 00:16:21.363 "subsystem": "iobuf", 00:16:21.363 "config": [ 00:16:21.363 { 00:16:21.363 "method": "iobuf_set_options", 00:16:21.363 "params": { 00:16:21.363 "small_pool_count": 8192, 00:16:21.363 "large_pool_count": 1024, 00:16:21.363 "small_bufsize": 8192, 00:16:21.363 "large_bufsize": 135168, 00:16:21.363 "enable_numa": false 00:16:21.363 } 00:16:21.363 } 00:16:21.363 ] 00:16:21.363 }, 00:16:21.363 { 00:16:21.363 "subsystem": "sock", 00:16:21.363 "config": [ 00:16:21.363 { 00:16:21.363 "method": "sock_set_default_impl", 00:16:21.363 "params": { 00:16:21.363 "impl_name": "posix" 00:16:21.363 } 00:16:21.363 }, 00:16:21.363 { 00:16:21.363 "method": "sock_impl_set_options", 00:16:21.363 "params": { 00:16:21.363 "impl_name": "ssl", 00:16:21.363 "recv_buf_size": 4096, 00:16:21.363 "send_buf_size": 4096, 00:16:21.363 "enable_recv_pipe": true, 00:16:21.363 "enable_quickack": false, 00:16:21.363 "enable_placement_id": 0, 00:16:21.363 "enable_zerocopy_send_server": true, 00:16:21.363 "enable_zerocopy_send_client": false, 00:16:21.363 "zerocopy_threshold": 0, 00:16:21.363 "tls_version": 0, 00:16:21.363 "enable_ktls": false 00:16:21.363 } 00:16:21.363 }, 00:16:21.363 { 00:16:21.363 "method": "sock_impl_set_options", 00:16:21.363 "params": { 00:16:21.363 "impl_name": "posix", 00:16:21.363 "recv_buf_size": 2097152, 00:16:21.363 "send_buf_size": 2097152, 00:16:21.363 "enable_recv_pipe": true, 00:16:21.363 "enable_quickack": false, 00:16:21.363 "enable_placement_id": 0, 00:16:21.363 "enable_zerocopy_send_server": true, 00:16:21.363 "enable_zerocopy_send_client": false, 00:16:21.363 "zerocopy_threshold": 0, 00:16:21.363 "tls_version": 0, 00:16:21.363 "enable_ktls": false 00:16:21.363 } 00:16:21.363 } 00:16:21.363 ] 00:16:21.363 }, 00:16:21.363 { 00:16:21.363 "subsystem": "vmd", 00:16:21.363 "config": [] 00:16:21.363 }, 00:16:21.363 { 00:16:21.363 "subsystem": "accel", 00:16:21.363 "config": [ 00:16:21.363 { 00:16:21.363 "method": "accel_set_options", 00:16:21.363 "params": { 00:16:21.363 "small_cache_size": 128, 00:16:21.363 "large_cache_size": 16, 00:16:21.363 "task_count": 2048, 00:16:21.363 "sequence_count": 2048, 00:16:21.363 "buf_count": 2048 00:16:21.363 } 
00:16:21.363 } 00:16:21.363 ] 00:16:21.363 }, 00:16:21.363 { 00:16:21.363 "subsystem": "bdev", 00:16:21.363 "config": [ 00:16:21.363 { 00:16:21.363 "method": "bdev_set_options", 00:16:21.363 "params": { 00:16:21.363 "bdev_io_pool_size": 65535, 00:16:21.363 "bdev_io_cache_size": 256, 00:16:21.363 "bdev_auto_examine": true, 00:16:21.363 "iobuf_small_cache_size": 128, 00:16:21.363 "iobuf_large_cache_size": 16 00:16:21.363 } 00:16:21.363 }, 00:16:21.363 { 00:16:21.363 "method": "bdev_raid_set_options", 00:16:21.363 "params": { 00:16:21.363 "process_window_size_kb": 1024, 00:16:21.363 "process_max_bandwidth_mb_sec": 0 00:16:21.363 } 00:16:21.363 }, 00:16:21.363 { 00:16:21.363 "method": "bdev_iscsi_set_options", 00:16:21.363 "params": { 00:16:21.363 "timeout_sec": 30 00:16:21.363 } 00:16:21.363 }, 00:16:21.363 { 00:16:21.364 "method": "bdev_nvme_set_options", 00:16:21.364 "params": { 00:16:21.364 "action_on_timeout": "none", 00:16:21.364 "timeout_us": 0, 00:16:21.364 "timeout_admin_us": 0, 00:16:21.364 "keep_alive_timeout_ms": 10000, 00:16:21.364 "arbitration_burst": 0, 00:16:21.364 "low_priority_weight": 0, 00:16:21.364 "medium_priority_weight": 0, 00:16:21.364 "high_priority_weight": 0, 00:16:21.364 "nvme_adminq_poll_period_us": 10000, 00:16:21.364 "nvme_ioq_poll_period_us": 0, 00:16:21.364 "io_queue_requests": 0, 00:16:21.364 "delay_cmd_submit": true, 00:16:21.364 "transport_retry_count": 4, 00:16:21.364 "bdev_retry_count": 3, 00:16:21.364 "transport_ack_timeout": 0, 00:16:21.364 "ctrlr_loss_timeout_sec": 0, 00:16:21.364 "reconnect_delay_sec": 0, 00:16:21.364 "fast_io_fail_timeout_sec": 0, 00:16:21.364 "disable_auto_failback": false, 00:16:21.364 "generate_uuids": false, 00:16:21.364 "transport_tos": 0, 00:16:21.364 "nvme_error_stat": false, 00:16:21.364 "rdma_srq_size": 0, 00:16:21.364 "io_path_stat": false, 00:16:21.364 "allow_accel_sequence": false, 00:16:21.364 "rdma_max_cq_size": 0, 00:16:21.364 "rdma_cm_event_timeout_ms": 0, 00:16:21.364 "dhchap_digests": [ 00:16:21.364 "sha256", 00:16:21.364 "sha384", 00:16:21.364 "sha512" 00:16:21.364 ], 00:16:21.364 "dhchap_dhgroups": [ 00:16:21.364 "null", 00:16:21.364 "ffdhe2048", 00:16:21.364 "ffdhe3072", 00:16:21.364 "ffdhe4096", 00:16:21.364 "ffdhe6144", 00:16:21.364 "ffdhe8192" 00:16:21.364 ] 00:16:21.364 } 00:16:21.364 }, 00:16:21.364 { 00:16:21.364 "method": "bdev_nvme_set_hotplug", 00:16:21.364 "params": { 00:16:21.364 "period_us": 100000, 00:16:21.364 "enable": false 00:16:21.364 } 00:16:21.364 }, 00:16:21.364 { 00:16:21.364 "method": "bdev_malloc_create", 00:16:21.364 "params": { 00:16:21.364 "name": "malloc0", 00:16:21.364 "num_blocks": 8192, 00:16:21.364 "block_size": 4096, 00:16:21.364 "physical_block_size": 4096, 00:16:21.364 "uuid": "5ee27a29-3056-4ceb-b47e-6a5de98eaaf6", 00:16:21.364 "optimal_io_boundary": 0, 00:16:21.364 "md_size": 0, 00:16:21.364 "dif_type": 0, 00:16:21.364 "dif_is_head_of_md": false, 00:16:21.364 "dif_pi_format": 0 00:16:21.364 } 00:16:21.364 }, 00:16:21.364 { 00:16:21.364 "method": "bdev_wait_for_examine" 00:16:21.364 } 00:16:21.364 ] 00:16:21.364 }, 00:16:21.364 { 00:16:21.364 "subsystem": "scsi", 00:16:21.364 "config": null 00:16:21.364 }, 00:16:21.364 { 00:16:21.364 "subsystem": "scheduler", 00:16:21.364 "config": [ 00:16:21.364 { 00:16:21.364 "method": "framework_set_scheduler", 00:16:21.364 "params": { 00:16:21.364 "name": "static" 00:16:21.364 } 00:16:21.364 } 00:16:21.364 ] 00:16:21.364 }, 00:16:21.364 { 00:16:21.364 "subsystem": "vhost_scsi", 00:16:21.364 "config": [] 00:16:21.364 }, 00:16:21.364 { 
00:16:21.364 "subsystem": "vhost_blk", 00:16:21.364 "config": [] 00:16:21.364 }, 00:16:21.364 { 00:16:21.364 "subsystem": "ublk", 00:16:21.364 "config": [ 00:16:21.364 { 00:16:21.364 "method": "ublk_create_target", 00:16:21.364 "params": { 00:16:21.364 "cpumask": "1" 00:16:21.364 } 00:16:21.364 }, 00:16:21.364 { 00:16:21.364 "method": "ublk_start_disk", 00:16:21.364 "params": { 00:16:21.364 "bdev_name": "malloc0", 00:16:21.364 "ublk_id": 0, 00:16:21.364 "num_queues": 1, 00:16:21.364 "queue_depth": 128 00:16:21.364 } 00:16:21.364 } 00:16:21.364 ] 00:16:21.364 }, 00:16:21.364 { 00:16:21.364 "subsystem": "nbd", 00:16:21.364 "config": [] 00:16:21.364 }, 00:16:21.364 { 00:16:21.364 "subsystem": "nvmf", 00:16:21.364 "config": [ 00:16:21.364 { 00:16:21.364 "method": "nvmf_set_config", 00:16:21.364 "params": { 00:16:21.364 "discovery_filter": "match_any", 00:16:21.364 "admin_cmd_passthru": { 00:16:21.364 "identify_ctrlr": false 00:16:21.364 }, 00:16:21.364 "dhchap_digests": [ 00:16:21.364 "sha256", 00:16:21.364 "sha384", 00:16:21.364 "sha512" 00:16:21.364 ], 00:16:21.364 "dhchap_dhgroups": [ 00:16:21.364 "null", 00:16:21.364 "ffdhe2048", 00:16:21.364 "ffdhe3072", 00:16:21.364 "ffdhe4096", 00:16:21.364 "ffdhe6144", 00:16:21.364 "ffdhe8192" 00:16:21.364 ] 00:16:21.364 } 00:16:21.364 }, 00:16:21.364 { 00:16:21.364 "method": "nvmf_set_max_subsystems", 00:16:21.364 "params": { 00:16:21.364 "max_subsystems": 1024 00:16:21.364 } 00:16:21.364 }, 00:16:21.364 { 00:16:21.364 "method": "nvmf_set_crdt", 00:16:21.364 "params": { 00:16:21.364 "crdt1": 0, 00:16:21.364 "crdt2": 0, 00:16:21.364 "crdt3": 0 00:16:21.364 } 00:16:21.364 } 00:16:21.364 ] 00:16:21.364 }, 00:16:21.364 { 00:16:21.364 "subsystem": "iscsi", 00:16:21.364 "config": [ 00:16:21.364 { 00:16:21.364 "method": "iscsi_set_options", 00:16:21.364 "params": { 00:16:21.364 "node_base": "iqn.2016-06.io.spdk", 00:16:21.364 "max_sessions": 128, 00:16:21.364 "max_connections_per_session": 2, 00:16:21.364 "max_queue_depth": 64, 00:16:21.364 "default_time2wait": 2, 00:16:21.364 "default_time2retain": 20, 00:16:21.364 "first_burst_length": 8192, 00:16:21.364 "immediate_data": true, 00:16:21.364 "allow_duplicated_isid": false, 00:16:21.364 "error_recovery_level": 0, 00:16:21.364 "nop_timeout": 60, 00:16:21.364 "nop_in_interval": 30, 00:16:21.364 "disable_chap": false, 00:16:21.364 "require_chap": false, 00:16:21.364 "mutual_chap": false, 00:16:21.364 "chap_group": 0, 00:16:21.364 "max_large_datain_per_connection": 64, 00:16:21.364 "max_r2t_per_connection": 4, 00:16:21.364 "pdu_pool_size": 36864, 00:16:21.364 "immediate_data_pool_size": 16384, 00:16:21.364 "data_out_pool_size": 2048 00:16:21.364 } 00:16:21.364 } 00:16:21.364 ] 00:16:21.364 } 00:16:21.364 ] 00:16:21.364 }' 00:16:21.364 23:01:00 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 86449 00:16:21.364 23:01:00 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 86449 ']' 00:16:21.364 23:01:00 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 86449 00:16:21.364 23:01:00 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:21.365 23:01:00 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:21.365 23:01:00 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86449 00:16:21.365 killing process with pid 86449 00:16:21.365 23:01:00 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:21.365 23:01:00 
ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:21.365 23:01:00 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86449' 00:16:21.365 23:01:00 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 86449 00:16:21.365 23:01:00 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 86449 00:16:21.936 [2024-11-26 23:01:00.758694] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:21.936 [2024-11-26 23:01:00.794442] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:21.936 [2024-11-26 23:01:00.794593] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:21.936 [2024-11-26 23:01:00.802349] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:21.936 [2024-11-26 23:01:00.802413] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:21.936 [2024-11-26 23:01:00.802432] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:21.936 [2024-11-26 23:01:00.802475] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:21.936 [2024-11-26 23:01:00.802637] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:22.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.197 23:01:01 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=86489 00:16:22.197 23:01:01 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 86489 00:16:22.197 23:01:01 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 86489 ']' 00:16:22.197 23:01:01 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.197 23:01:01 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:22.197 23:01:01 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:22.197 23:01:01 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:22.197 23:01:01 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:22.197 "subsystems": [ 00:16:22.197 { 00:16:22.197 "subsystem": "fsdev", 00:16:22.197 "config": [ 00:16:22.197 { 00:16:22.197 "method": "fsdev_set_opts", 00:16:22.197 "params": { 00:16:22.197 "fsdev_io_pool_size": 65535, 00:16:22.197 "fsdev_io_cache_size": 256 00:16:22.197 } 00:16:22.197 } 00:16:22.197 ] 00:16:22.197 }, 00:16:22.197 { 00:16:22.197 "subsystem": "keyring", 00:16:22.197 "config": [] 00:16:22.197 }, 00:16:22.197 { 00:16:22.197 "subsystem": "iobuf", 00:16:22.197 "config": [ 00:16:22.197 { 00:16:22.197 "method": "iobuf_set_options", 00:16:22.197 "params": { 00:16:22.197 "small_pool_count": 8192, 00:16:22.197 "large_pool_count": 1024, 00:16:22.197 "small_bufsize": 8192, 00:16:22.197 "large_bufsize": 135168, 00:16:22.197 "enable_numa": false 00:16:22.197 } 00:16:22.197 } 00:16:22.197 ] 00:16:22.197 }, 00:16:22.197 { 00:16:22.197 "subsystem": "sock", 00:16:22.197 "config": [ 00:16:22.197 { 00:16:22.197 "method": "sock_set_default_impl", 00:16:22.197 "params": { 00:16:22.197 "impl_name": "posix" 00:16:22.197 } 00:16:22.197 }, 00:16:22.197 { 00:16:22.197 "method": "sock_impl_set_options", 00:16:22.197 "params": { 00:16:22.197 "impl_name": "ssl", 00:16:22.197 "recv_buf_size": 4096, 00:16:22.197 "send_buf_size": 4096, 00:16:22.197 "enable_recv_pipe": true, 00:16:22.197 "enable_quickack": false, 00:16:22.197 "enable_placement_id": 0, 00:16:22.197 "enable_zerocopy_send_server": true, 00:16:22.197 "enable_zerocopy_send_client": false, 00:16:22.197 "zerocopy_threshold": 0, 00:16:22.197 "tls_version": 0, 00:16:22.197 "enable_ktls": false 00:16:22.197 } 00:16:22.197 }, 00:16:22.197 { 00:16:22.197 "method": "sock_impl_set_options", 00:16:22.197 "params": { 00:16:22.197 "impl_name": "posix", 00:16:22.197 "recv_buf_size": 2097152, 00:16:22.197 "send_buf_size": 2097152, 00:16:22.197 "enable_recv_pipe": true, 00:16:22.197 "enable_quickack": false, 00:16:22.197 "enable_placement_id": 0, 00:16:22.197 "enable_zerocopy_send_server": true, 00:16:22.197 "enable_zerocopy_send_client": false, 00:16:22.197 "zerocopy_threshold": 0, 00:16:22.197 "tls_version": 0, 00:16:22.197 "enable_ktls": false 00:16:22.197 } 00:16:22.197 } 00:16:22.197 ] 00:16:22.197 }, 00:16:22.197 { 00:16:22.197 "subsystem": "vmd", 00:16:22.197 "config": [] 00:16:22.197 }, 00:16:22.197 { 00:16:22.197 "subsystem": "accel", 00:16:22.197 "config": [ 00:16:22.197 { 00:16:22.197 "method": "accel_set_options", 00:16:22.197 "params": { 00:16:22.197 "small_cache_size": 128, 00:16:22.197 "large_cache_size": 16, 00:16:22.197 "task_count": 2048, 00:16:22.197 "sequence_count": 2048, 00:16:22.197 "buf_count": 2048 00:16:22.197 } 00:16:22.197 } 00:16:22.197 ] 00:16:22.197 }, 00:16:22.197 { 00:16:22.197 "subsystem": "bdev", 00:16:22.197 "config": [ 00:16:22.197 { 00:16:22.197 "method": "bdev_set_options", 00:16:22.197 "params": { 00:16:22.197 "bdev_io_pool_size": 65535, 00:16:22.197 "bdev_io_cache_size": 256, 00:16:22.197 "bdev_auto_examine": true, 00:16:22.197 "iobuf_small_cache_size": 128, 00:16:22.197 "iobuf_large_cache_size": 16 00:16:22.197 } 00:16:22.197 }, 00:16:22.197 { 00:16:22.197 "method": "bdev_raid_set_options", 00:16:22.197 "params": { 00:16:22.197 "process_window_size_kb": 1024, 00:16:22.197 "process_max_bandwidth_mb_sec": 0 00:16:22.197 } 00:16:22.197 }, 00:16:22.197 { 00:16:22.197 "method": "bdev_iscsi_set_options", 00:16:22.197 "params": { 
00:16:22.197 "timeout_sec": 30 00:16:22.197 } 00:16:22.197 }, 00:16:22.197 { 00:16:22.197 "method": "bdev_nvme_set_options", 00:16:22.197 "params": { 00:16:22.197 "action_on_timeout": "none", 00:16:22.197 "timeout_us": 0, 00:16:22.197 "timeout_admin_us": 0, 00:16:22.197 "keep_alive_timeout_ms": 10000, 00:16:22.197 "arbitration_burst": 0, 00:16:22.197 "low_priority_weight": 0, 00:16:22.197 "medium_priority_weight": 0, 00:16:22.197 "high_priority_weight": 0, 00:16:22.197 "nvme_adminq_poll_period_us": 10000, 00:16:22.197 "nvme_ioq_poll_period_us": 0, 00:16:22.197 "io_queue_requests": 0, 00:16:22.197 "delay_cmd_submit": true, 00:16:22.197 "transport_retry_count": 4, 00:16:22.197 "bdev_retry_count": 3, 00:16:22.197 "transport_ack_timeout": 0, 00:16:22.197 "ctrlr_loss_timeout_sec": 0, 00:16:22.197 "reconnect_delay_sec": 0, 00:16:22.197 "fast_io_fail_timeout_sec": 0, 00:16:22.197 "disable_auto_failback": false, 00:16:22.197 "generate_uuids": false, 00:16:22.197 "transport_tos": 0, 00:16:22.197 "nvme_error_stat": false, 00:16:22.197 "rdma_srq_size": 0, 00:16:22.197 "io_path_stat": false, 00:16:22.197 "allow_accel_sequence": false, 00:16:22.197 "rdma_max_cq_size": 0, 00:16:22.197 "rdma_cm_event_timeout_ms": 0, 00:16:22.197 "dhchap_digests": [ 00:16:22.197 "sha256", 00:16:22.197 "sha384", 00:16:22.197 "sha512" 00:16:22.197 ], 00:16:22.197 "dhchap_dhgroups": [ 00:16:22.197 "null", 00:16:22.197 "ffdhe2048", 00:16:22.197 "ffdhe3072", 00:16:22.197 "ffdhe4096", 00:16:22.197 "ffdhe6144", 00:16:22.197 "ffdhe8192" 00:16:22.197 ] 00:16:22.197 } 00:16:22.197 }, 00:16:22.197 { 00:16:22.197 "method": "bdev_nvme_set_hotplug", 00:16:22.197 "params": { 00:16:22.197 "period_us": 100000, 00:16:22.197 "enable": false 00:16:22.197 } 00:16:22.197 }, 00:16:22.197 { 00:16:22.197 "method": "bdev_malloc_create", 00:16:22.197 "params": { 00:16:22.197 "name": "malloc0", 00:16:22.197 "num_blocks": 8192, 00:16:22.197 "block_size": 4096, 00:16:22.197 "physical_block_size": 4096, 00:16:22.197 "uuid": "5ee27a29-3056-4ceb-b47e-6a5de98eaaf6", 00:16:22.197 "optimal_io_boundary": 0, 00:16:22.197 "md_size": 0, 00:16:22.197 "dif_type": 0, 00:16:22.197 "dif_is_head_of_md": false, 00:16:22.197 "dif_pi_format": 0 00:16:22.197 } 00:16:22.197 }, 00:16:22.197 { 00:16:22.197 "method": "bdev_wait_for_examine" 00:16:22.197 } 00:16:22.197 ] 00:16:22.197 }, 00:16:22.197 { 00:16:22.197 "subsystem": "scsi", 00:16:22.197 "config": null 00:16:22.197 }, 00:16:22.197 { 00:16:22.197 "subsystem": "scheduler", 00:16:22.197 "config": [ 00:16:22.197 { 00:16:22.197 "method": "framework_set_scheduler", 00:16:22.197 "params": { 00:16:22.197 "name": "static" 00:16:22.197 } 00:16:22.197 } 00:16:22.197 ] 00:16:22.197 }, 00:16:22.197 { 00:16:22.197 "subsystem": "vhost_scsi", 00:16:22.197 "config": [] 00:16:22.197 }, 00:16:22.197 { 00:16:22.197 "subsystem": "vhost_blk", 00:16:22.197 "config": [] 00:16:22.197 }, 00:16:22.197 { 00:16:22.197 "subsystem": "ublk", 00:16:22.197 "config": [ 00:16:22.197 { 00:16:22.197 "method": "ublk_create_target", 00:16:22.197 "params": { 00:16:22.197 "cpumask": "1" 00:16:22.197 } 00:16:22.197 }, 00:16:22.197 { 00:16:22.197 "method": "ublk_start_disk", 00:16:22.197 "params": { 00:16:22.197 "bdev_name": "malloc0", 00:16:22.197 "ublk_id": 0, 00:16:22.197 "num_queues": 1, 00:16:22.197 "queue_depth": 128 00:16:22.198 } 00:16:22.198 } 00:16:22.198 ] 00:16:22.198 }, 00:16:22.198 { 00:16:22.198 "subsystem": "nbd", 00:16:22.198 "config": [] 00:16:22.198 }, 00:16:22.198 { 00:16:22.198 "subsystem": "nvmf", 00:16:22.198 "config": [ 00:16:22.198 { 
00:16:22.198 "method": "nvmf_set_config", 00:16:22.198 "params": { 00:16:22.198 "discovery_filter": "match_any", 00:16:22.198 "admin_cmd_passthru": { 00:16:22.198 "identify_ctrlr": false 00:16:22.198 }, 00:16:22.198 "dhchap_digests": [ 00:16:22.198 "sha256", 00:16:22.198 "sha384", 00:16:22.198 "sha512" 00:16:22.198 ], 00:16:22.198 "dhchap_dhgroups": [ 00:16:22.198 "null", 00:16:22.198 "ffdhe2048", 00:16:22.198 "ffdhe3072", 00:16:22.198 "ffdhe4096", 00:16:22.198 "ffdhe6144", 00:16:22.198 "ffdhe8192" 00:16:22.198 ] 00:16:22.198 } 00:16:22.198 }, 00:16:22.198 { 00:16:22.198 "method": "nvmf_set_max_subsystems", 00:16:22.198 "params": { 00:16:22.198 "max_subsystems": 1024 00:16:22.198 } 00:16:22.198 }, 00:16:22.198 { 00:16:22.198 "method": "nvmf_set_crdt", 00:16:22.198 "params": { 00:16:22.198 "crdt1": 0, 00:16:22.198 "crdt2": 0, 00:16:22.198 "crdt3": 0 00:16:22.198 } 00:16:22.198 } 00:16:22.198 ] 00:16:22.198 }, 00:16:22.198 { 00:16:22.198 "subsystem": "iscsi", 00:16:22.198 "config": [ 00:16:22.198 { 00:16:22.198 "method": "iscsi_set_options", 00:16:22.198 "params": { 00:16:22.198 "node_base": "iqn.2016-06.io.spdk", 00:16:22.198 "max_sessions": 128, 00:16:22.198 "max_connections_per_session": 2, 00:16:22.198 "max_queue_depth": 64, 00:16:22.198 "default_time2wait": 2, 00:16:22.198 "default_time2retain": 20, 00:16:22.198 "first_burst_length": 8192, 00:16:22.198 "immediate_data": true, 00:16:22.198 "allow_duplicated_isid": false, 00:16:22.198 "error_recovery_level": 0, 00:16:22.198 "nop_timeout": 60, 00:16:22.198 "nop_in_interval": 30, 00:16:22.198 "disable_chap": false, 00:16:22.198 "require_chap": false, 00:16:22.198 "mutual_chap": false, 00:16:22.198 "chap_group": 0, 00:16:22.198 "max_large_datain_per_connection": 64, 00:16:22.198 "max_r2t_per_connection": 4, 00:16:22.198 "pdu_pool_size": 36864, 00:16:22.198 "immediate_data_pool_size": 16384, 00:16:22.198 "data_out_pool_size": 2048 00:16:22.198 } 00:16:22.198 } 00:16:22.198 ] 00:16:22.198 } 00:16:22.198 ] 00:16:22.198 }' 00:16:22.198 23:01:01 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:22.198 23:01:01 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:22.460 [2024-11-26 23:01:01.365626] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:16:22.460 [2024-11-26 23:01:01.365787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86489 ] 00:16:22.460 [2024-11-26 23:01:01.506827] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:22.460 [2024-11-26 23:01:01.537538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.460 [2024-11-26 23:01:01.565472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.031 [2024-11-26 23:01:01.938320] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:23.031 [2024-11-26 23:01:01.938691] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:23.031 [2024-11-26 23:01:01.946462] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:23.031 [2024-11-26 23:01:01.946554] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:23.031 [2024-11-26 23:01:01.946564] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:23.031 [2024-11-26 23:01:01.946572] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:23.031 [2024-11-26 23:01:01.955420] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:23.031 [2024-11-26 23:01:01.955459] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:23.031 [2024-11-26 23:01:01.962341] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:23.031 [2024-11-26 23:01:01.962464] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:23.031 [2024-11-26 23:01:01.979324] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:23.292 23:01:02 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:23.292 23:01:02 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:23.293 23:01:02 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:23.293 23:01:02 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:23.293 23:01:02 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.293 23:01:02 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:23.293 23:01:02 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.293 23:01:02 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:23.293 23:01:02 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:23.293 23:01:02 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 86489 00:16:23.293 23:01:02 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 86489 ']' 00:16:23.293 23:01:02 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 86489 00:16:23.293 23:01:02 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:23.293 23:01:02 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:23.293 23:01:02 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86489 00:16:23.293 killing process with pid 86489 00:16:23.293 23:01:02 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:23.293 23:01:02 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:23.293 23:01:02 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86489' 00:16:23.293 23:01:02 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 
86489 00:16:23.293 23:01:02 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 86489 00:16:23.554 [2024-11-26 23:01:02.572864] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:23.554 [2024-11-26 23:01:02.611432] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:23.554 [2024-11-26 23:01:02.611582] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:23.554 [2024-11-26 23:01:02.618340] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:23.554 [2024-11-26 23:01:02.618412] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:23.554 [2024-11-26 23:01:02.618422] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:23.554 [2024-11-26 23:01:02.618455] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:23.554 [2024-11-26 23:01:02.618616] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:24.126 23:01:03 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:24.126 ************************************ 00:16:24.126 END TEST test_save_ublk_config 00:16:24.126 ************************************ 00:16:24.126 00:16:24.126 real 0m3.964s 00:16:24.126 user 0m2.699s 00:16:24.126 sys 0m1.964s 00:16:24.126 23:01:03 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:24.126 23:01:03 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:24.126 23:01:03 ublk -- ublk/ublk.sh@139 -- # spdk_pid=86540 00:16:24.126 23:01:03 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:24.126 23:01:03 ublk -- ublk/ublk.sh@141 -- # waitforlisten 86540 00:16:24.126 23:01:03 ublk -- common/autotest_common.sh@835 -- # '[' -z 86540 ']' 00:16:24.126 23:01:03 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.126 23:01:03 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:24.126 23:01:03 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.126 23:01:03 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:24.126 23:01:03 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:24.126 23:01:03 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:24.126 [2024-11-26 23:01:03.232648] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:16:24.126 [2024-11-26 23:01:03.232790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86540 ] 00:16:24.397 [2024-11-26 23:01:03.372251] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
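This second target (pid 86540, cores 0-1 via `-m 0x3`) hosts the create tests. Condensed, the RPC sequence that test_create_ublk drives below looks like this; the flags are copied from the trace, and the assumption is that the harness's `rpc_cmd` wrapper resolves to scripts/rpc.py:

./scripts/rpc.py ublk_create_target                      # bring up the ublk target (kernel bridge)
./scripts/rpc.py bdev_malloc_create 128 4096             # 128 MiB RAM-backed bdev, 4 KiB blocks -> Malloc0
./scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512   # expose it as /dev/ublkb0, 4 queues x depth 512
./scripts/rpc.py ublk_get_disks -n 0                     # verify the registered device entry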
00:16:24.397 [2024-11-26 23:01:03.402416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:24.397 [2024-11-26 23:01:03.432915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.397 [2024-11-26 23:01:03.432960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.981 23:01:04 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:24.981 23:01:04 ublk -- common/autotest_common.sh@868 -- # return 0 00:16:24.981 23:01:04 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:24.981 23:01:04 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:24.981 23:01:04 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:24.981 23:01:04 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:24.981 ************************************ 00:16:24.981 START TEST test_create_ublk 00:16:24.981 ************************************ 00:16:24.981 23:01:04 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:16:24.981 23:01:04 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:24.981 23:01:04 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.981 23:01:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:24.981 [2024-11-26 23:01:04.097325] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:24.981 [2024-11-26 23:01:04.099035] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:24.981 23:01:04 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.981 23:01:04 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:24.981 23:01:04 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:24.981 23:01:04 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.981 23:01:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:25.243 23:01:04 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.243 23:01:04 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:25.243 23:01:04 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:25.243 23:01:04 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.243 23:01:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:25.243 [2024-11-26 23:01:04.190489] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:25.243 [2024-11-26 23:01:04.190963] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:25.243 [2024-11-26 23:01:04.190979] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:25.243 [2024-11-26 23:01:04.190987] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:25.243 [2024-11-26 23:01:04.199646] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:25.243 [2024-11-26 23:01:04.199689] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:25.243 [2024-11-26 23:01:04.206339] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:25.243 [2024-11-26 23:01:04.207067] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:25.243 [2024-11-26 23:01:04.222431] ublk.c: 349:ublk_ctrl_process_cqe: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:25.243 23:01:04 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.243 23:01:04 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:25.243 23:01:04 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:25.243 23:01:04 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:25.243 23:01:04 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.243 23:01:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:25.243 23:01:04 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.243 23:01:04 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:25.243 { 00:16:25.243 "ublk_device": "/dev/ublkb0", 00:16:25.243 "id": 0, 00:16:25.243 "queue_depth": 512, 00:16:25.243 "num_queues": 4, 00:16:25.243 "bdev_name": "Malloc0" 00:16:25.243 } 00:16:25.243 ]' 00:16:25.243 23:01:04 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:25.243 23:01:04 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:25.243 23:01:04 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:25.243 23:01:04 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:25.243 23:01:04 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:25.243 23:01:04 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:25.243 23:01:04 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:25.505 23:01:04 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:25.505 23:01:04 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:25.505 23:01:04 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:25.505 23:01:04 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:25.505 23:01:04 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:25.505 23:01:04 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:25.505 23:01:04 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:25.505 23:01:04 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:25.505 23:01:04 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:25.505 23:01:04 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:25.505 23:01:04 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:25.505 23:01:04 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:25.505 23:01:04 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:25.505 23:01:04 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:25.505 23:01:04 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:25.505 fio: verification read phase will never 
start because write phase uses all of runtime 00:16:25.505 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:25.505 fio-3.35 00:16:25.505 Starting 1 process 00:16:35.512 00:16:35.512 fio_test: (groupid=0, jobs=1): err= 0: pid=86584: Tue Nov 26 23:01:14 2024 00:16:35.512 write: IOPS=17.1k, BW=66.6MiB/s (69.9MB/s)(666MiB/10001msec); 0 zone resets 00:16:35.512 clat (usec): min=39, max=3977, avg=57.81, stdev=93.84 00:16:35.512 lat (usec): min=39, max=3978, avg=58.27, stdev=93.86 00:16:35.512 clat percentiles (usec): 00:16:35.512 | 1.00th=[ 45], 5.00th=[ 46], 10.00th=[ 47], 20.00th=[ 49], 00:16:35.512 | 30.00th=[ 50], 40.00th=[ 51], 50.00th=[ 52], 60.00th=[ 53], 00:16:35.512 | 70.00th=[ 55], 80.00th=[ 57], 90.00th=[ 66], 95.00th=[ 73], 00:16:35.512 | 99.00th=[ 95], 99.50th=[ 169], 99.90th=[ 1778], 99.95th=[ 2868], 00:16:35.512 | 99.99th=[ 3523] 00:16:35.512 bw ( KiB/s): min=47328, max=73376, per=99.52%, avg=67895.16, stdev=6973.83, samples=19 00:16:35.512 iops : min=11832, max=18344, avg=16973.79, stdev=1743.46, samples=19 00:16:35.512 lat (usec) : 50=36.03%, 100=63.18%, 250=0.57%, 500=0.06%, 750=0.01% 00:16:35.512 lat (usec) : 1000=0.01% 00:16:35.512 lat (msec) : 2=0.05%, 4=0.09% 00:16:35.512 cpu : usr=3.17%, sys=17.72%, ctx=170577, majf=0, minf=796 00:16:35.512 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:35.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.512 issued rwts: total=0,170575,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.512 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:35.512 00:16:35.512 Run status group 0 (all jobs): 00:16:35.512 WRITE: bw=66.6MiB/s (69.9MB/s), 66.6MiB/s-66.6MiB/s (69.9MB/s-69.9MB/s), io=666MiB (699MB), run=10001-10001msec 00:16:35.512 00:16:35.512 Disk stats (read/write): 00:16:35.512 ublkb0: ios=0/168656, merge=0/0, ticks=0/7534, in_queue=7534, util=99.09% 00:16:35.512 23:01:14 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:35.512 23:01:14 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.512 23:01:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:35.512 [2024-11-26 23:01:14.634238] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:35.772 [2024-11-26 23:01:14.671346] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:35.772 [2024-11-26 23:01:14.671905] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:35.772 [2024-11-26 23:01:14.680346] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:35.772 [2024-11-26 23:01:14.680563] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:35.772 [2024-11-26 23:01:14.680573] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:35.772 23:01:14 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.772 23:01:14 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:16:35.772 23:01:14 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:16:35.772 23:01:14 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:35.772 23:01:14 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 
00:16:35.772 23:01:14 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:35.772 23:01:14 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:35.772 23:01:14 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:35.772 23:01:14 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:16:35.772 23:01:14 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.772 23:01:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:35.772 [2024-11-26 23:01:14.695392] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:35.772 request: 00:16:35.772 { 00:16:35.772 "ublk_id": 0, 00:16:35.772 "method": "ublk_stop_disk", 00:16:35.772 "req_id": 1 00:16:35.772 } 00:16:35.772 Got JSON-RPC error response 00:16:35.772 response: 00:16:35.772 { 00:16:35.772 "code": -19, 00:16:35.772 "message": "No such device" 00:16:35.772 } 00:16:35.772 23:01:14 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:35.772 23:01:14 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:16:35.772 23:01:14 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:35.772 23:01:14 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:35.772 23:01:14 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:35.772 23:01:14 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:35.772 23:01:14 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.772 23:01:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:35.772 [2024-11-26 23:01:14.711365] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:35.773 [2024-11-26 23:01:14.712679] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:35.773 [2024-11-26 23:01:14.712709] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:35.773 23:01:14 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.773 23:01:14 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:35.773 23:01:14 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.773 23:01:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:35.773 23:01:14 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.773 23:01:14 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:35.773 23:01:14 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:35.773 23:01:14 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.773 23:01:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:35.773 23:01:14 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.773 23:01:14 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:35.773 23:01:14 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:35.773 23:01:14 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:35.773 23:01:14 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:35.773 23:01:14 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.773 23:01:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 
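The error path above is deliberate: stopping ublk_id 0 a second time must fail with code -19 (-ENODEV, "No such device"). In plain rpc.py form — every call below appears in the trace; only the ordering comments are added:

./scripts/rpc.py ublk_stop_disk 0             # detach /dev/ublkb0
./scripts/rpc.py ublk_stop_disk 0             # repeated stop; expected JSON-RPC error -19
./scripts/rpc.py ublk_destroy_target          # tear the ublk target down
./scripts/rpc.py bdev_malloc_delete Malloc0   # release the backing bdev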
00:16:35.773 23:01:14 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.773 23:01:14 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:35.773 23:01:14 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:35.773 ************************************ 00:16:35.773 END TEST test_create_ublk 00:16:35.773 ************************************ 00:16:35.773 23:01:14 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:35.773 00:16:35.773 real 0m10.787s 00:16:35.773 user 0m0.601s 00:16:35.773 sys 0m1.867s 00:16:35.773 23:01:14 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:35.773 23:01:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.033 23:01:14 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:36.033 23:01:14 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:36.033 23:01:14 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:36.033 23:01:14 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.033 ************************************ 00:16:36.033 START TEST test_create_multi_ublk 00:16:36.033 ************************************ 00:16:36.033 23:01:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:16:36.033 23:01:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:36.033 23:01:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.033 23:01:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.033 [2024-11-26 23:01:14.922308] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:36.033 [2024-11-26 23:01:14.923187] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:36.033 23:01:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.033 23:01:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:36.033 23:01:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:36.033 23:01:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:36.033 23:01:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:36.033 23:01:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.033 23:01:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.033 23:01:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.033 23:01:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:36.033 23:01:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:36.033 23:01:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.033 23:01:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.033 [2024-11-26 23:01:14.994421] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:36.033 [2024-11-26 23:01:14.994721] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:36.033 [2024-11-26 23:01:14.994737] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:36.033 [2024-11-26 23:01:14.994743] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: 
ctrl cmd UBLK_CMD_ADD_DEV 00:16:36.033 [2024-11-26 23:01:15.018319] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:36.033 [2024-11-26 23:01:15.018340] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:36.033 [2024-11-26 23:01:15.030318] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:36.033 [2024-11-26 23:01:15.030802] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:36.033 [2024-11-26 23:01:15.066317] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:36.033 23:01:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.033 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:36.033 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:36.033 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:36.033 23:01:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.033 23:01:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.033 23:01:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.033 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:36.033 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:36.033 23:01:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.033 23:01:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.033 [2024-11-26 23:01:15.150400] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:36.033 [2024-11-26 23:01:15.150691] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:36.033 [2024-11-26 23:01:15.150705] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:36.033 [2024-11-26 23:01:15.150710] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:36.293 [2024-11-26 23:01:15.162328] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:36.293 [2024-11-26 23:01:15.162345] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:36.293 [2024-11-26 23:01:15.174322] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:36.293 [2024-11-26 23:01:15.174816] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:36.293 [2024-11-26 23:01:15.210321] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:36.293 23:01:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.294 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:36.294 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:36.294 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:36.294 23:01:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.294 23:01:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.294 23:01:15 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.294 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:36.294 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:36.294 23:01:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.294 23:01:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.294 [2024-11-26 23:01:15.294406] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:36.294 [2024-11-26 23:01:15.294694] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:36.294 [2024-11-26 23:01:15.294706] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:36.294 [2024-11-26 23:01:15.294717] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:36.294 [2024-11-26 23:01:15.306321] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:36.294 [2024-11-26 23:01:15.306341] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:36.294 [2024-11-26 23:01:15.318315] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:36.294 [2024-11-26 23:01:15.318801] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:36.294 [2024-11-26 23:01:15.331331] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:36.294 23:01:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.294 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:36.294 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:36.294 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:36.294 23:01:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.294 23:01:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.294 23:01:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.294 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:36.294 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:36.294 23:01:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.294 23:01:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.294 [2024-11-26 23:01:15.414402] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:36.294 [2024-11-26 23:01:15.414690] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:36.294 [2024-11-26 23:01:15.414703] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:36.294 [2024-11-26 23:01:15.414708] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:36.554 [2024-11-26 23:01:15.426332] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:36.554 [2024-11-26 23:01:15.426349] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:36.554 [2024-11-26 23:01:15.438332] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 
completed 00:16:36.554 [2024-11-26 23:01:15.438810] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:36.554 [2024-11-26 23:01:15.478327] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:36.554 23:01:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.554 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:36.554 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:36.554 23:01:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.554 23:01:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.554 23:01:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.554 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:36.554 { 00:16:36.554 "ublk_device": "/dev/ublkb0", 00:16:36.554 "id": 0, 00:16:36.554 "queue_depth": 512, 00:16:36.554 "num_queues": 4, 00:16:36.554 "bdev_name": "Malloc0" 00:16:36.554 }, 00:16:36.554 { 00:16:36.554 "ublk_device": "/dev/ublkb1", 00:16:36.554 "id": 1, 00:16:36.554 "queue_depth": 512, 00:16:36.554 "num_queues": 4, 00:16:36.554 "bdev_name": "Malloc1" 00:16:36.554 }, 00:16:36.554 { 00:16:36.554 "ublk_device": "/dev/ublkb2", 00:16:36.554 "id": 2, 00:16:36.554 "queue_depth": 512, 00:16:36.554 "num_queues": 4, 00:16:36.554 "bdev_name": "Malloc2" 00:16:36.554 }, 00:16:36.554 { 00:16:36.554 "ublk_device": "/dev/ublkb3", 00:16:36.554 "id": 3, 00:16:36.554 "queue_depth": 512, 00:16:36.554 "num_queues": 4, 00:16:36.554 "bdev_name": "Malloc3" 00:16:36.554 } 00:16:36.554 ]' 00:16:36.554 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:36.554 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:36.554 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:36.554 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:36.554 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:36.554 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:36.554 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:36.554 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:36.554 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:36.554 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:36.554 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:36.554 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:36.554 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:36.554 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:36.813 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:16:36.813 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:36.813 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:36.813 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:36.813 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 
512 = \5\1\2 ]] 00:16:36.813 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:36.813 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:36.813 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:36.813 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:36.813 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:36.813 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:36.813 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:36.813 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:36.813 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:36.813 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:36.813 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:36.813 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:37.073 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:37.073 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:37.073 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:37.073 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:37.073 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:37.073 23:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:37.073 23:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:37.073 23:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:37.073 23:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:37.073 23:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:37.073 23:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:37.073 23:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:37.073 23:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:37.073 23:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:37.073 23:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:37.073 23:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:37.073 23:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:37.073 23:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:37.073 23:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.073 23:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.073 [2024-11-26 23:01:16.126382] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:37.073 [2024-11-26 23:01:16.174357] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:37.073 [2024-11-26 23:01:16.174996] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:37.073 [2024-11-26 23:01:16.186337] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:37.073 [2024-11-26 23:01:16.186553] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:37.073 [2024-11-26 23:01:16.186562] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:37.332 23:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.332 23:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:37.332 23:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:37.332 23:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.332 23:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.332 [2024-11-26 23:01:16.210385] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:37.332 [2024-11-26 23:01:16.244679] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:37.332 [2024-11-26 23:01:16.245705] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:37.332 [2024-11-26 23:01:16.254320] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:37.332 [2024-11-26 23:01:16.254534] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:37.332 [2024-11-26 23:01:16.254543] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:37.332 23:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.332 23:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:37.332 23:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:37.332 23:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.332 23:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.332 [2024-11-26 23:01:16.278360] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:37.332 [2024-11-26 23:01:16.317772] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:37.332 [2024-11-26 23:01:16.318674] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:37.332 [2024-11-26 23:01:16.326319] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:37.332 [2024-11-26 23:01:16.326519] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:37.332 [2024-11-26 23:01:16.326532] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:37.332 23:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.332 23:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:37.332 23:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:37.332 23:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.332 23:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.332 [2024-11-26 23:01:16.345383] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:37.332 [2024-11-26 23:01:16.408738] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:37.332 [2024-11-26 23:01:16.409723] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: 
ctrl cmd UBLK_CMD_DEL_DEV 00:16:37.332 [2024-11-26 23:01:16.414317] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:37.332 [2024-11-26 23:01:16.414521] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:37.332 [2024-11-26 23:01:16.414533] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:37.332 23:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.332 23:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:37.589 [2024-11-26 23:01:16.606374] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:37.589 [2024-11-26 23:01:16.607486] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:37.589 [2024-11-26 23:01:16.607512] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:37.589 23:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:37.589 23:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:37.589 23:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:37.589 23:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.589 23:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.589 23:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.590 23:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:37.590 23:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:37.590 23:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.590 23:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.849 23:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.849 23:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:37.849 23:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:37.849 23:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.849 23:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.849 23:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.849 23:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:37.849 23:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:37.849 23:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.849 23:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.849 23:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.849 23:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:37.849 23:01:16 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:37.849 23:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.849 23:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.849 23:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.849 23:01:16 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:37.849 23:01:16 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:37.849 23:01:16 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:37.849 23:01:16 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:37.849 23:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.849 23:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.849 23:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.849 23:01:16 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:37.849 23:01:16 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:37.849 ************************************ 00:16:37.849 END TEST test_create_multi_ublk 00:16:37.849 ************************************ 00:16:37.849 23:01:16 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:37.849 00:16:37.849 real 0m2.046s 00:16:37.849 user 0m0.782s 00:16:37.849 sys 0m0.136s 00:16:37.849 23:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:37.849 23:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.110 23:01:16 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:38.110 23:01:16 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:38.110 23:01:16 ublk -- ublk/ublk.sh@130 -- # killprocess 86540 00:16:38.110 23:01:16 ublk -- common/autotest_common.sh@954 -- # '[' -z 86540 ']' 00:16:38.110 23:01:16 ublk -- common/autotest_common.sh@958 -- # kill -0 86540 00:16:38.110 23:01:16 ublk -- common/autotest_common.sh@959 -- # uname 00:16:38.110 23:01:17 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:38.110 23:01:17 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86540 00:16:38.110 killing process with pid 86540 00:16:38.110 23:01:17 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:38.110 23:01:17 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:38.110 23:01:17 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86540' 00:16:38.110 23:01:17 ublk -- common/autotest_common.sh@973 -- # kill 86540 00:16:38.110 23:01:17 ublk -- common/autotest_common.sh@978 -- # wait 86540 00:16:38.110 [2024-11-26 23:01:17.185684] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:38.110 [2024-11-26 23:01:17.185741] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:38.370 00:16:38.370 real 0m18.518s 00:16:38.370 user 0m27.584s 00:16:38.370 sys 0m9.159s 00:16:38.370 23:01:17 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:38.370 23:01:17 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.370 ************************************ 00:16:38.370 END TEST ublk 00:16:38.370 ************************************ 00:16:38.370 23:01:17 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:38.370 23:01:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:38.370 23:01:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:38.370 23:01:17 -- common/autotest_common.sh@10 -- # set +x 00:16:38.370 ************************************ 00:16:38.370 START TEST ublk_recovery 00:16:38.370 ************************************ 00:16:38.370 23:01:17 ublk_recovery -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:38.629 * Looking for test storage... 00:16:38.629 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:38.629 23:01:17 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:38.629 23:01:17 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:16:38.629 23:01:17 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:38.629 23:01:17 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:38.629 23:01:17 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:38.629 23:01:17 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:38.629 23:01:17 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:38.629 23:01:17 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:38.629 23:01:17 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:38.629 23:01:17 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:38.629 23:01:17 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:38.629 23:01:17 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:38.629 23:01:17 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:38.629 23:01:17 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:38.629 23:01:17 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:38.629 23:01:17 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:16:38.629 23:01:17 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:16:38.629 23:01:17 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:38.629 23:01:17 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:38.629 23:01:17 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:16:38.629 23:01:17 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:16:38.629 23:01:17 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:38.629 23:01:17 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:16:38.629 23:01:17 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:38.629 23:01:17 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:16:38.629 23:01:17 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:16:38.629 23:01:17 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:38.629 23:01:17 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:16:38.629 23:01:17 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:38.629 23:01:17 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:38.629 23:01:17 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:38.629 23:01:17 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:16:38.629 23:01:17 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:38.629 23:01:17 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:38.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.629 --rc genhtml_branch_coverage=1 00:16:38.629 --rc genhtml_function_coverage=1 00:16:38.629 --rc genhtml_legend=1 00:16:38.629 --rc geninfo_all_blocks=1 00:16:38.629 --rc geninfo_unexecuted_blocks=1 00:16:38.629 00:16:38.629 ' 00:16:38.629 23:01:17 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:38.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.629 --rc 
genhtml_branch_coverage=1 00:16:38.629 --rc genhtml_function_coverage=1 00:16:38.629 --rc genhtml_legend=1 00:16:38.629 --rc geninfo_all_blocks=1 00:16:38.629 --rc geninfo_unexecuted_blocks=1 00:16:38.629 00:16:38.629 ' 00:16:38.629 23:01:17 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:38.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.629 --rc genhtml_branch_coverage=1 00:16:38.629 --rc genhtml_function_coverage=1 00:16:38.629 --rc genhtml_legend=1 00:16:38.629 --rc geninfo_all_blocks=1 00:16:38.629 --rc geninfo_unexecuted_blocks=1 00:16:38.629 00:16:38.629 ' 00:16:38.629 23:01:17 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:38.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.629 --rc genhtml_branch_coverage=1 00:16:38.629 --rc genhtml_function_coverage=1 00:16:38.629 --rc genhtml_legend=1 00:16:38.629 --rc geninfo_all_blocks=1 00:16:38.629 --rc geninfo_unexecuted_blocks=1 00:16:38.629 00:16:38.629 ' 00:16:38.629 23:01:17 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:38.629 23:01:17 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:38.629 23:01:17 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:38.629 23:01:17 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:38.629 23:01:17 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:38.629 23:01:17 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:38.629 23:01:17 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:38.629 23:01:17 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:38.629 23:01:17 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:38.629 23:01:17 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:38.629 23:01:17 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=86910 00:16:38.629 23:01:17 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:38.629 23:01:17 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 86910 00:16:38.629 23:01:17 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 86910 ']' 00:16:38.629 23:01:17 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.629 23:01:17 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:38.629 23:01:17 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.630 23:01:17 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:38.630 23:01:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:38.630 23:01:17 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:38.630 [2024-11-26 23:01:17.710698] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
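For readers reconstructing the flow, the setup that follows creates the recovery fixture: a ublk target inside the freshly started spdk_tgt (pid 86910), a 64 MiB malloc bdev with 4 KiB blocks, and ublk device 1 (/dev/ublkb1) with 2 queues of depth 128. A minimal sketch of the equivalent RPC sequence, with the binary path and argument values taken from the trace (illustrative, not the test script itself):

  # Start the SPDK target on cores 0-1 with ublk debug logging, as in the trace.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
  spdk_pid=$!

  # Create the kernel-facing ublk target, back it with a malloc bdev
  # (64 MiB, 4096 B blocks), and expose it as ublk device 1 (/dev/ublkb1)
  # with 2 queues of depth 128.
  scripts/rpc.py ublk_create_target
  scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
  scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128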
00:16:38.630 [2024-11-26 23:01:17.710830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86910 ] 00:16:38.889 [2024-11-26 23:01:17.847821] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:38.889 [2024-11-26 23:01:17.873067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:38.889 [2024-11-26 23:01:17.897326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.889 [2024-11-26 23:01:17.897420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.460 23:01:18 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:39.460 23:01:18 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:39.460 23:01:18 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:39.460 23:01:18 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.460 23:01:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.460 [2024-11-26 23:01:18.549313] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:39.460 [2024-11-26 23:01:18.550243] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:39.460 23:01:18 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.460 23:01:18 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:39.460 23:01:18 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.460 23:01:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.460 malloc0 00:16:39.460 23:01:18 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.460 23:01:18 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:39.460 23:01:18 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.460 23:01:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.460 [2024-11-26 23:01:18.581423] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:16:39.460 [2024-11-26 23:01:18.581506] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:39.460 [2024-11-26 23:01:18.581515] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:39.460 [2024-11-26 23:01:18.581520] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:39.721 [2024-11-26 23:01:18.590390] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:39.721 [2024-11-26 23:01:18.590408] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:39.721 [2024-11-26 23:01:18.593313] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:39.721 [2024-11-26 23:01:18.593423] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:39.721 [2024-11-26 23:01:18.610313] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:39.721 1 00:16:39.721 23:01:18 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.721 23:01:18 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:40.668 
23:01:19 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=86938 00:16:40.669 23:01:19 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:40.669 23:01:19 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:40.669 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:40.669 fio-3.35 00:16:40.669 Starting 1 process 00:16:45.959 23:01:24 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 86910 00:16:45.959 23:01:24 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:51.337 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 86910 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:51.337 23:01:29 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=87048 00:16:51.337 23:01:29 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:51.337 23:01:29 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:51.337 23:01:29 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 87048 00:16:51.337 23:01:29 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 87048 ']' 00:16:51.337 23:01:29 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.337 23:01:29 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:51.337 23:01:29 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.337 23:01:29 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:51.337 23:01:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:51.337 [2024-11-26 23:01:29.719439] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:16:51.337 [2024-11-26 23:01:29.719595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87048 ] 00:16:51.337 [2024-11-26 23:01:29.861751] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
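At this point the test has started fio against /dev/ublkb1, killed the original target (pid 86910) mid-I/O with SIGKILL, and restarted it as pid 87048. The trace below then reattaches the still-open ublk device through user recovery instead of starting a fresh disk; ublk_recover_disk is the call that drives the UBLK_CMD_START_USER_RECOVERY / UBLK_CMD_END_USER_RECOVERY commands seen later. A sketch of that step under the same rpc.py interface, assembled from the commands visible in the trace:

  # Drive I/O in the background, then crash the target while fio is running.
  taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
      --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
      --time_based --runtime=60 &
  fio_pid=$!
  sleep 5
  kill -9 "$spdk_pid"

  # Restart the target, re-create the target object and backing bdev,
  # then recover (not start) ublk device 1 and let fio run to completion.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
  scripts/rpc.py ublk_create_target
  scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
  scripts/rpc.py ublk_recover_disk malloc0 1
  wait "$fio_pid"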
00:16:51.337 [2024-11-26 23:01:29.885688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:51.337 [2024-11-26 23:01:29.911555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.337 [2024-11-26 23:01:29.911638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.598 23:01:30 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:51.598 23:01:30 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:51.598 23:01:30 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:51.598 23:01:30 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.598 23:01:30 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:51.598 [2024-11-26 23:01:30.561314] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:51.598 [2024-11-26 23:01:30.562249] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:51.598 23:01:30 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.598 23:01:30 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:51.598 23:01:30 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.598 23:01:30 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:51.598 malloc0 00:16:51.598 23:01:30 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.598 23:01:30 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:51.598 23:01:30 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.598 23:01:30 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:51.598 [2024-11-26 23:01:30.593434] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:51.598 [2024-11-26 23:01:30.593471] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:51.598 [2024-11-26 23:01:30.593479] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:51.598 [2024-11-26 23:01:30.601339] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:51.598 [2024-11-26 23:01:30.601364] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:51.598 1 00:16:51.598 23:01:30 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.598 23:01:30 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 86938 00:16:52.541 [2024-11-26 23:01:31.601381] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:52.542 [2024-11-26 23:01:31.606320] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:52.542 [2024-11-26 23:01:31.606334] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:53.484 [2024-11-26 23:01:32.606377] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:53.746 [2024-11-26 23:01:32.614324] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:53.746 [2024-11-26 23:01:32.614367] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:54.702 [2024-11-26 23:01:33.614395] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:54.702 [2024-11-26 23:01:33.619327] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:54.702 [2024-11-26 
23:01:33.619343] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:54.702 [2024-11-26 23:01:33.619355] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:16:54.702 [2024-11-26 23:01:33.619452] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:16.632 [2024-11-26 23:01:54.774330] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:16.632 [2024-11-26 23:01:54.780867] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:16.632 [2024-11-26 23:01:54.788530] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:16.632 [2024-11-26 23:01:54.788548] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:43.170 00:17:43.170 fio_test: (groupid=0, jobs=1): err= 0: pid=86945: Tue Nov 26 23:02:19 2024 00:17:43.170 read: IOPS=14.2k, BW=55.3MiB/s (58.0MB/s)(3321MiB/60003msec) 00:17:43.170 slat (nsec): min=971, max=344920, avg=5449.89, stdev=1463.75 00:17:43.170 clat (usec): min=1150, max=30180k, avg=4185.12, stdev=244901.50 00:17:43.170 lat (usec): min=1160, max=30180k, avg=4190.57, stdev=244901.49 00:17:43.170 clat percentiles (usec): 00:17:43.170 | 1.00th=[ 1729], 5.00th=[ 1811], 10.00th=[ 1893], 20.00th=[ 2008], 00:17:43.170 | 30.00th=[ 2057], 40.00th=[ 2073], 50.00th=[ 2089], 60.00th=[ 2114], 00:17:43.170 | 70.00th=[ 2114], 80.00th=[ 2147], 90.00th=[ 2212], 95.00th=[ 3195], 00:17:43.170 | 99.00th=[ 5276], 99.50th=[ 5669], 99.90th=[ 7046], 99.95th=[ 8094], 00:17:43.170 | 99.99th=[13173] 00:17:43.170 bw ( KiB/s): min= 6024, max=135136, per=100.00%, avg=111638.00, stdev=18512.21, samples=60 00:17:43.170 iops : min= 1506, max=33784, avg=27909.50, stdev=4628.05, samples=60 00:17:43.170 write: IOPS=14.1k, BW=55.3MiB/s (58.0MB/s)(3317MiB/60003msec); 0 zone resets 00:17:43.170 slat (nsec): min=1144, max=868901, avg=5635.04, stdev=1763.56 00:17:43.170 clat (usec): min=1231, max=30180k, avg=4842.80, stdev=277880.42 00:17:43.170 lat (usec): min=1245, max=30180k, avg=4848.44, stdev=277880.41 00:17:43.170 clat percentiles (usec): 00:17:43.170 | 1.00th=[ 1795], 5.00th=[ 1893], 10.00th=[ 1991], 20.00th=[ 2114], 00:17:43.170 | 30.00th=[ 2147], 40.00th=[ 2180], 50.00th=[ 2180], 60.00th=[ 2212], 00:17:43.170 | 70.00th=[ 2245], 80.00th=[ 2245], 90.00th=[ 2311], 95.00th=[ 3130], 00:17:43.170 | 99.00th=[ 5342], 99.50th=[ 5800], 99.90th=[ 7111], 99.95th=[ 8094], 00:17:43.170 | 99.99th=[13173] 00:17:43.171 bw ( KiB/s): min= 5984, max=133688, per=100.00%, avg=111497.33, stdev=18578.63, samples=60 00:17:43.171 iops : min= 1496, max=33422, avg=27874.33, stdev=4644.66, samples=60 00:17:43.171 lat (msec) : 2=15.49%, 4=81.58%, 10=2.89%, 20=0.02%, >=2000=0.01% 00:17:43.171 cpu : usr=3.22%, sys=16.00%, ctx=55603, majf=0, minf=13 00:17:43.171 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:43.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:43.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:43.171 issued rwts: total=850214,849051,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:43.171 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:43.171 00:17:43.171 Run status group 0 (all jobs): 00:17:43.171 READ: bw=55.3MiB/s (58.0MB/s), 55.3MiB/s-55.3MiB/s (58.0MB/s-58.0MB/s), io=3321MiB (3482MB), run=60003-60003msec 00:17:43.171 WRITE: 
bw=55.3MiB/s (58.0MB/s), 55.3MiB/s-55.3MiB/s (58.0MB/s-58.0MB/s), io=3317MiB (3478MB), run=60003-60003msec 00:17:43.171 00:17:43.171 Disk stats (read/write): 00:17:43.171 ublkb1: ios=847133/846042, merge=0/0, ticks=3504492/3984802, in_queue=7489294, util=99.91% 00:17:43.171 23:02:19 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:43.171 23:02:19 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.171 23:02:19 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:43.171 [2024-11-26 23:02:19.865135] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:43.171 [2024-11-26 23:02:19.903329] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:43.171 [2024-11-26 23:02:19.903493] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:43.171 [2024-11-26 23:02:19.912323] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:43.171 [2024-11-26 23:02:19.912448] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:43.171 [2024-11-26 23:02:19.912456] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:43.171 23:02:19 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.171 23:02:19 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:43.171 23:02:19 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.171 23:02:19 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:43.171 [2024-11-26 23:02:19.920386] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:43.171 [2024-11-26 23:02:19.921580] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:43.171 [2024-11-26 23:02:19.921613] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:43.171 23:02:19 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.171 23:02:19 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:43.171 23:02:19 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:43.171 23:02:19 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 87048 00:17:43.171 23:02:19 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 87048 ']' 00:17:43.171 23:02:19 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 87048 00:17:43.171 23:02:19 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:17:43.171 23:02:19 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:43.171 23:02:19 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87048 00:17:43.171 23:02:19 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:43.171 23:02:19 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:43.171 killing process with pid 87048 00:17:43.171 23:02:19 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87048' 00:17:43.171 23:02:19 ublk_recovery -- common/autotest_common.sh@973 -- # kill 87048 00:17:43.171 23:02:19 ublk_recovery -- common/autotest_common.sh@978 -- # wait 87048 00:17:43.171 [2024-11-26 23:02:20.192657] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:43.171 [2024-11-26 23:02:20.192717] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:43.171 ************************************ 00:17:43.171 END TEST ublk_recovery 00:17:43.171 ************************************ 00:17:43.171 00:17:43.171 real 
1m3.069s 00:17:43.171 user 1m42.699s 00:17:43.171 sys 0m24.440s 00:17:43.171 23:02:20 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:43.171 23:02:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:43.171 23:02:20 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:17:43.171 23:02:20 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:43.171 23:02:20 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:43.171 23:02:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:43.171 23:02:20 -- common/autotest_common.sh@10 -- # set +x 00:17:43.171 23:02:20 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:43.171 23:02:20 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:17:43.171 23:02:20 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:17:43.171 23:02:20 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:43.171 23:02:20 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:43.171 23:02:20 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:43.171 23:02:20 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:17:43.171 23:02:20 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:43.171 23:02:20 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:43.171 23:02:20 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:17:43.171 23:02:20 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:43.171 23:02:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:43.171 23:02:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:43.171 23:02:20 -- common/autotest_common.sh@10 -- # set +x 00:17:43.171 ************************************ 00:17:43.171 START TEST ftl 00:17:43.171 ************************************ 00:17:43.171 23:02:20 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:43.171 * Looking for test storage... 00:17:43.171 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:43.171 23:02:20 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:43.171 23:02:20 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:17:43.171 23:02:20 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:43.171 23:02:20 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:43.171 23:02:20 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:43.171 23:02:20 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:43.171 23:02:20 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:43.171 23:02:20 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:17:43.171 23:02:20 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:17:43.171 23:02:20 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:17:43.171 23:02:20 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:17:43.171 23:02:20 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:17:43.171 23:02:20 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:17:43.171 23:02:20 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:17:43.171 23:02:20 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:43.171 23:02:20 ftl -- scripts/common.sh@344 -- # case "$op" in 00:17:43.171 23:02:20 ftl -- scripts/common.sh@345 -- # : 1 00:17:43.171 23:02:20 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:43.171 23:02:20 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:43.171 23:02:20 ftl -- scripts/common.sh@365 -- # decimal 1 00:17:43.171 23:02:20 ftl -- scripts/common.sh@353 -- # local d=1 00:17:43.171 23:02:20 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:43.171 23:02:20 ftl -- scripts/common.sh@355 -- # echo 1 00:17:43.171 23:02:20 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:17:43.171 23:02:20 ftl -- scripts/common.sh@366 -- # decimal 2 00:17:43.171 23:02:20 ftl -- scripts/common.sh@353 -- # local d=2 00:17:43.171 23:02:20 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:43.171 23:02:20 ftl -- scripts/common.sh@355 -- # echo 2 00:17:43.171 23:02:20 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:17:43.171 23:02:20 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:43.171 23:02:20 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:43.171 23:02:20 ftl -- scripts/common.sh@368 -- # return 0 00:17:43.171 23:02:20 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:43.171 23:02:20 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:43.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.171 --rc genhtml_branch_coverage=1 00:17:43.171 --rc genhtml_function_coverage=1 00:17:43.171 --rc genhtml_legend=1 00:17:43.171 --rc geninfo_all_blocks=1 00:17:43.171 --rc geninfo_unexecuted_blocks=1 00:17:43.171 00:17:43.171 ' 00:17:43.171 23:02:20 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:43.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.171 --rc genhtml_branch_coverage=1 00:17:43.171 --rc genhtml_function_coverage=1 00:17:43.171 --rc genhtml_legend=1 00:17:43.171 --rc geninfo_all_blocks=1 00:17:43.171 --rc geninfo_unexecuted_blocks=1 00:17:43.171 00:17:43.171 ' 00:17:43.171 23:02:20 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:43.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.171 --rc genhtml_branch_coverage=1 00:17:43.171 --rc genhtml_function_coverage=1 00:17:43.171 --rc genhtml_legend=1 00:17:43.171 --rc geninfo_all_blocks=1 00:17:43.171 --rc geninfo_unexecuted_blocks=1 00:17:43.171 00:17:43.171 ' 00:17:43.171 23:02:20 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:43.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.171 --rc genhtml_branch_coverage=1 00:17:43.171 --rc genhtml_function_coverage=1 00:17:43.171 --rc genhtml_legend=1 00:17:43.171 --rc geninfo_all_blocks=1 00:17:43.171 --rc geninfo_unexecuted_blocks=1 00:17:43.171 00:17:43.171 ' 00:17:43.171 23:02:20 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:43.171 23:02:20 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:43.171 23:02:20 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:43.171 23:02:20 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:43.171 23:02:20 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:43.171 23:02:20 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:43.171 23:02:20 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:43.171 23:02:20 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:43.171 23:02:20 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:43.171 23:02:20 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:43.172 23:02:20 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:43.172 23:02:20 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:43.172 23:02:20 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:43.172 23:02:20 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:43.172 23:02:20 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:43.172 23:02:20 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:43.172 23:02:20 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:43.172 23:02:20 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:43.172 23:02:20 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:43.172 23:02:20 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:43.172 23:02:20 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:43.172 23:02:20 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:43.172 23:02:20 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:43.172 23:02:20 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:43.172 23:02:20 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:43.172 23:02:20 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:43.172 23:02:20 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:43.172 23:02:20 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:43.172 23:02:20 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:43.172 23:02:20 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:43.172 23:02:20 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:43.172 23:02:20 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:17:43.172 23:02:20 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:43.172 23:02:20 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:43.172 23:02:20 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:43.172 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:43.172 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:43.172 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:43.172 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:43.172 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:43.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
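ftl.sh next brings up a target with --wait-for-rpc, loads the NVMe controllers via gen_nvme.sh, and picks its two devices by filtering bdev_get_bdevs with jq: the nv-cache disk must expose 64-byte metadata (md_size==64) and at least 1310720 blocks, while any other non-zoned bdev of the same minimum size can serve as the base disk. A sketch of that selection, using the jq filters exactly as they appear in the trace below:

  # Pick a cache-capable device: 64 B metadata, not zoned, >= 1310720 blocks.
  cache_disks=$(scripts/rpc.py bdev_get_bdevs | jq -r \
    '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')

  # Pick a base device: any other large-enough, non-zoned NVMe bdev
  # (the cache device resolved to 0000:00:10.0 in this run).
  base_disks=$(scripts/rpc.py bdev_get_bdevs | jq -r \
    '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')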
00:17:43.172 23:02:21 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=87850 00:17:43.172 23:02:21 ftl -- ftl/ftl.sh@38 -- # waitforlisten 87850 00:17:43.172 23:02:21 ftl -- common/autotest_common.sh@835 -- # '[' -z 87850 ']' 00:17:43.172 23:02:21 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.172 23:02:21 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:43.172 23:02:21 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.172 23:02:21 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:43.172 23:02:21 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:43.172 23:02:21 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:43.172 [2024-11-26 23:02:21.469415] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:17:43.172 [2024-11-26 23:02:21.469561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87850 ] 00:17:43.172 [2024-11-26 23:02:21.607711] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:43.172 [2024-11-26 23:02:21.634180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.172 [2024-11-26 23:02:21.675317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.433 23:02:22 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:43.433 23:02:22 ftl -- common/autotest_common.sh@868 -- # return 0 00:17:43.433 23:02:22 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:43.433 23:02:22 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:44.007 23:02:22 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:44.007 23:02:22 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:44.578 23:02:23 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:44.578 23:02:23 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:44.578 23:02:23 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:44.578 23:02:23 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:44.578 23:02:23 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:44.578 23:02:23 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:44.578 23:02:23 ftl -- ftl/ftl.sh@50 -- # break 00:17:44.578 23:02:23 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:44.578 23:02:23 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:17:44.579 23:02:23 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:44.579 23:02:23 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:44.840 23:02:23 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:44.840 23:02:23 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:44.840 23:02:23 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:44.840 23:02:23 ftl -- 
ftl/ftl.sh@63 -- # break 00:17:44.840 23:02:23 ftl -- ftl/ftl.sh@66 -- # killprocess 87850 00:17:44.840 23:02:23 ftl -- common/autotest_common.sh@954 -- # '[' -z 87850 ']' 00:17:44.840 23:02:23 ftl -- common/autotest_common.sh@958 -- # kill -0 87850 00:17:44.840 23:02:23 ftl -- common/autotest_common.sh@959 -- # uname 00:17:44.840 23:02:23 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:44.840 23:02:23 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87850 00:17:44.840 23:02:23 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:44.840 23:02:23 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:44.840 killing process with pid 87850 00:17:44.840 23:02:23 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87850' 00:17:44.840 23:02:23 ftl -- common/autotest_common.sh@973 -- # kill 87850 00:17:44.840 23:02:23 ftl -- common/autotest_common.sh@978 -- # wait 87850 00:17:45.411 23:02:24 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:45.411 23:02:24 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:45.411 23:02:24 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:45.411 23:02:24 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:45.411 23:02:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:45.411 ************************************ 00:17:45.411 START TEST ftl_fio_basic 00:17:45.411 ************************************ 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:45.411 * Looking for test storage... 00:17:45.411 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:45.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.411 --rc genhtml_branch_coverage=1 00:17:45.411 --rc genhtml_function_coverage=1 00:17:45.411 --rc genhtml_legend=1 00:17:45.411 --rc geninfo_all_blocks=1 00:17:45.411 --rc geninfo_unexecuted_blocks=1 00:17:45.411 00:17:45.411 ' 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:45.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.411 --rc genhtml_branch_coverage=1 00:17:45.411 --rc genhtml_function_coverage=1 00:17:45.411 --rc genhtml_legend=1 00:17:45.411 --rc geninfo_all_blocks=1 00:17:45.411 --rc geninfo_unexecuted_blocks=1 00:17:45.411 00:17:45.411 ' 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:45.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.411 --rc genhtml_branch_coverage=1 00:17:45.411 --rc genhtml_function_coverage=1 00:17:45.411 --rc genhtml_legend=1 00:17:45.411 --rc geninfo_all_blocks=1 00:17:45.411 --rc geninfo_unexecuted_blocks=1 00:17:45.411 00:17:45.411 ' 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:45.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.411 --rc genhtml_branch_coverage=1 00:17:45.411 --rc genhtml_function_coverage=1 00:17:45.411 --rc genhtml_legend=1 00:17:45.411 --rc geninfo_all_blocks=1 00:17:45.411 --rc geninfo_unexecuted_blocks=1 00:17:45.411 00:17:45.411 ' 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:45.411 23:02:24 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:45.412 23:02:24 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:45.412 23:02:24 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:45.412 23:02:24 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:45.412 23:02:24 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=87966 00:17:45.412 23:02:24 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 87966 00:17:45.412 23:02:24 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 87966 ']' 00:17:45.412 23:02:24 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.412 23:02:24 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:45.412 23:02:24 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:45.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.412 23:02:24 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.412 23:02:24 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:45.412 23:02:24 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:45.673 [2024-11-26 23:02:24.556662] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:17:45.673 [2024-11-26 23:02:24.556783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87966 ] 00:17:45.673 [2024-11-26 23:02:24.691405] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
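Once the fio-basic target (pid 87966, core mask 7) is up, the trace below runs create_base_bdev: it attaches the NVMe controller at 0000:00:11.0 as nvme0, then sizes the resulting nvme0n1 namespace from its block size and block count before carving it up. A sketch of the sizing step, with the jq expressions taken from the trace (here 4096 B * 1310720 blocks = 5120 MiB):

  # Attach the base controller and read back the namespace geometry.
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  bs=$(scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[] .block_size')    # 4096
  nb=$(scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[] .num_blocks')    # 1310720
  echo $(( bs * nb / 1024 / 1024 ))                                        # 5120 (MiB)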
00:17:45.673 [2024-11-26 23:02:24.718674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:45.673 [2024-11-26 23:02:24.763715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:45.673 [2024-11-26 23:02:24.765399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.673 [2024-11-26 23:02:24.765428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:46.627 23:02:25 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:46.628 23:02:25 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:17:46.628 23:02:25 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:46.628 23:02:25 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:46.628 23:02:25 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:46.628 23:02:25 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:46.628 23:02:25 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:46.628 23:02:25 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:46.628 23:02:25 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:46.628 23:02:25 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:46.628 23:02:25 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:46.628 23:02:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:46.628 23:02:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:46.628 23:02:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:46.628 23:02:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:46.628 23:02:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:46.897 23:02:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:46.897 { 00:17:46.897 "name": "nvme0n1", 00:17:46.897 "aliases": [ 00:17:46.897 "41cc0598-930e-4a10-bc79-94cdd4e0e4b6" 00:17:46.897 ], 00:17:46.897 "product_name": "NVMe disk", 00:17:46.897 "block_size": 4096, 00:17:46.897 "num_blocks": 1310720, 00:17:46.897 "uuid": "41cc0598-930e-4a10-bc79-94cdd4e0e4b6", 00:17:46.897 "numa_id": -1, 00:17:46.897 "assigned_rate_limits": { 00:17:46.897 "rw_ios_per_sec": 0, 00:17:46.897 "rw_mbytes_per_sec": 0, 00:17:46.897 "r_mbytes_per_sec": 0, 00:17:46.897 "w_mbytes_per_sec": 0 00:17:46.897 }, 00:17:46.897 "claimed": false, 00:17:46.897 "zoned": false, 00:17:46.897 "supported_io_types": { 00:17:46.897 "read": true, 00:17:46.897 "write": true, 00:17:46.897 "unmap": true, 00:17:46.897 "flush": true, 00:17:46.897 "reset": true, 00:17:46.897 "nvme_admin": true, 00:17:46.897 "nvme_io": true, 00:17:46.897 "nvme_io_md": false, 00:17:46.897 "write_zeroes": true, 00:17:46.897 "zcopy": false, 00:17:46.897 "get_zone_info": false, 00:17:46.897 "zone_management": false, 00:17:46.897 "zone_append": false, 00:17:46.897 "compare": true, 00:17:46.897 "compare_and_write": false, 00:17:46.897 "abort": true, 00:17:46.897 "seek_hole": false, 00:17:46.897 "seek_data": false, 00:17:46.897 "copy": true, 00:17:46.897 "nvme_iov_md": false 00:17:46.897 }, 00:17:46.897 "driver_specific": { 00:17:46.897 "nvme": [ 00:17:46.897 { 00:17:46.897 "pci_address": "0000:00:11.0", 00:17:46.897 "trid": { 00:17:46.897 "trtype": "PCIe", 00:17:46.897 
"traddr": "0000:00:11.0" 00:17:46.897 }, 00:17:46.897 "ctrlr_data": { 00:17:46.897 "cntlid": 0, 00:17:46.897 "vendor_id": "0x1b36", 00:17:46.897 "model_number": "QEMU NVMe Ctrl", 00:17:46.897 "serial_number": "12341", 00:17:46.897 "firmware_revision": "8.0.0", 00:17:46.897 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:46.897 "oacs": { 00:17:46.897 "security": 0, 00:17:46.897 "format": 1, 00:17:46.897 "firmware": 0, 00:17:46.897 "ns_manage": 1 00:17:46.897 }, 00:17:46.897 "multi_ctrlr": false, 00:17:46.897 "ana_reporting": false 00:17:46.897 }, 00:17:46.897 "vs": { 00:17:46.897 "nvme_version": "1.4" 00:17:46.897 }, 00:17:46.897 "ns_data": { 00:17:46.897 "id": 1, 00:17:46.897 "can_share": false 00:17:46.897 } 00:17:46.897 } 00:17:46.897 ], 00:17:46.897 "mp_policy": "active_passive" 00:17:46.897 } 00:17:46.897 } 00:17:46.897 ]' 00:17:46.897 23:02:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:46.897 23:02:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:46.897 23:02:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:46.897 23:02:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:46.897 23:02:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:46.897 23:02:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:17:46.897 23:02:25 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:46.897 23:02:25 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:46.897 23:02:25 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:46.897 23:02:25 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:46.897 23:02:25 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:47.160 23:02:26 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:47.161 23:02:26 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:47.161 23:02:26 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=fd47ff06-4993-4493-a51a-f4ef96e67627 00:17:47.161 23:02:26 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u fd47ff06-4993-4493-a51a-f4ef96e67627 00:17:47.428 23:02:26 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=4a849864-ca57-496c-9672-668982fd896d 00:17:47.428 23:02:26 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4a849864-ca57-496c-9672-668982fd896d 00:17:47.428 23:02:26 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:47.428 23:02:26 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:47.428 23:02:26 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=4a849864-ca57-496c-9672-668982fd896d 00:17:47.428 23:02:26 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:47.428 23:02:26 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 4a849864-ca57-496c-9672-668982fd896d 00:17:47.428 23:02:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=4a849864-ca57-496c-9672-668982fd896d 00:17:47.428 23:02:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:47.428 23:02:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:47.428 23:02:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:47.428 23:02:26 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4a849864-ca57-496c-9672-668982fd896d 00:17:47.695 23:02:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:47.695 { 00:17:47.695 "name": "4a849864-ca57-496c-9672-668982fd896d", 00:17:47.695 "aliases": [ 00:17:47.695 "lvs/nvme0n1p0" 00:17:47.695 ], 00:17:47.695 "product_name": "Logical Volume", 00:17:47.695 "block_size": 4096, 00:17:47.695 "num_blocks": 26476544, 00:17:47.695 "uuid": "4a849864-ca57-496c-9672-668982fd896d", 00:17:47.695 "assigned_rate_limits": { 00:17:47.695 "rw_ios_per_sec": 0, 00:17:47.695 "rw_mbytes_per_sec": 0, 00:17:47.695 "r_mbytes_per_sec": 0, 00:17:47.695 "w_mbytes_per_sec": 0 00:17:47.695 }, 00:17:47.695 "claimed": false, 00:17:47.695 "zoned": false, 00:17:47.695 "supported_io_types": { 00:17:47.695 "read": true, 00:17:47.695 "write": true, 00:17:47.695 "unmap": true, 00:17:47.695 "flush": false, 00:17:47.695 "reset": true, 00:17:47.695 "nvme_admin": false, 00:17:47.695 "nvme_io": false, 00:17:47.695 "nvme_io_md": false, 00:17:47.695 "write_zeroes": true, 00:17:47.695 "zcopy": false, 00:17:47.695 "get_zone_info": false, 00:17:47.695 "zone_management": false, 00:17:47.695 "zone_append": false, 00:17:47.695 "compare": false, 00:17:47.695 "compare_and_write": false, 00:17:47.695 "abort": false, 00:17:47.695 "seek_hole": true, 00:17:47.695 "seek_data": true, 00:17:47.695 "copy": false, 00:17:47.695 "nvme_iov_md": false 00:17:47.695 }, 00:17:47.695 "driver_specific": { 00:17:47.695 "lvol": { 00:17:47.695 "lvol_store_uuid": "fd47ff06-4993-4493-a51a-f4ef96e67627", 00:17:47.695 "base_bdev": "nvme0n1", 00:17:47.695 "thin_provision": true, 00:17:47.695 "num_allocated_clusters": 0, 00:17:47.695 "snapshot": false, 00:17:47.695 "clone": false, 00:17:47.695 "esnap_clone": false 00:17:47.695 } 00:17:47.695 } 00:17:47.695 } 00:17:47.695 ]' 00:17:47.695 23:02:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:47.695 23:02:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:47.695 23:02:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:47.695 23:02:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:47.695 23:02:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:47.695 23:02:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:47.695 23:02:26 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:17:47.695 23:02:26 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:47.696 23:02:26 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:47.955 23:02:27 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:47.955 23:02:27 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:47.955 23:02:27 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 4a849864-ca57-496c-9672-668982fd896d 00:17:47.955 23:02:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=4a849864-ca57-496c-9672-668982fd896d 00:17:47.955 23:02:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:47.955 23:02:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:47.955 23:02:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:47.955 23:02:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4a849864-ca57-496c-9672-668982fd896d 00:17:48.213 23:02:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:48.213 { 00:17:48.213 "name": "4a849864-ca57-496c-9672-668982fd896d", 00:17:48.213 "aliases": [ 00:17:48.213 "lvs/nvme0n1p0" 00:17:48.213 ], 00:17:48.213 "product_name": "Logical Volume", 00:17:48.213 "block_size": 4096, 00:17:48.213 "num_blocks": 26476544, 00:17:48.213 "uuid": "4a849864-ca57-496c-9672-668982fd896d", 00:17:48.213 "assigned_rate_limits": { 00:17:48.213 "rw_ios_per_sec": 0, 00:17:48.213 "rw_mbytes_per_sec": 0, 00:17:48.213 "r_mbytes_per_sec": 0, 00:17:48.213 "w_mbytes_per_sec": 0 00:17:48.213 }, 00:17:48.213 "claimed": false, 00:17:48.213 "zoned": false, 00:17:48.213 "supported_io_types": { 00:17:48.213 "read": true, 00:17:48.213 "write": true, 00:17:48.213 "unmap": true, 00:17:48.213 "flush": false, 00:17:48.214 "reset": true, 00:17:48.214 "nvme_admin": false, 00:17:48.214 "nvme_io": false, 00:17:48.214 "nvme_io_md": false, 00:17:48.214 "write_zeroes": true, 00:17:48.214 "zcopy": false, 00:17:48.214 "get_zone_info": false, 00:17:48.214 "zone_management": false, 00:17:48.214 "zone_append": false, 00:17:48.214 "compare": false, 00:17:48.214 "compare_and_write": false, 00:17:48.214 "abort": false, 00:17:48.214 "seek_hole": true, 00:17:48.214 "seek_data": true, 00:17:48.214 "copy": false, 00:17:48.214 "nvme_iov_md": false 00:17:48.214 }, 00:17:48.214 "driver_specific": { 00:17:48.214 "lvol": { 00:17:48.214 "lvol_store_uuid": "fd47ff06-4993-4493-a51a-f4ef96e67627", 00:17:48.214 "base_bdev": "nvme0n1", 00:17:48.214 "thin_provision": true, 00:17:48.214 "num_allocated_clusters": 0, 00:17:48.214 "snapshot": false, 00:17:48.214 "clone": false, 00:17:48.214 "esnap_clone": false 00:17:48.214 } 00:17:48.214 } 00:17:48.214 } 00:17:48.214 ]' 00:17:48.214 23:02:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:48.214 23:02:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:48.214 23:02:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:48.214 23:02:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:48.214 23:02:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:48.214 23:02:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:48.214 23:02:27 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:48.214 23:02:27 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:48.472 23:02:27 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:48.472 23:02:27 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:48.472 23:02:27 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:48.472 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:48.472 23:02:27 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 4a849864-ca57-496c-9672-668982fd896d 00:17:48.472 23:02:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=4a849864-ca57-496c-9672-668982fd896d 00:17:48.472 23:02:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:48.472 23:02:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:48.472 23:02:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:48.472 23:02:27 ftl.ftl_fio_basic 
-- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4a849864-ca57-496c-9672-668982fd896d 00:17:48.731 23:02:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:48.731 { 00:17:48.731 "name": "4a849864-ca57-496c-9672-668982fd896d", 00:17:48.731 "aliases": [ 00:17:48.731 "lvs/nvme0n1p0" 00:17:48.731 ], 00:17:48.731 "product_name": "Logical Volume", 00:17:48.731 "block_size": 4096, 00:17:48.731 "num_blocks": 26476544, 00:17:48.731 "uuid": "4a849864-ca57-496c-9672-668982fd896d", 00:17:48.731 "assigned_rate_limits": { 00:17:48.731 "rw_ios_per_sec": 0, 00:17:48.731 "rw_mbytes_per_sec": 0, 00:17:48.731 "r_mbytes_per_sec": 0, 00:17:48.731 "w_mbytes_per_sec": 0 00:17:48.731 }, 00:17:48.731 "claimed": false, 00:17:48.731 "zoned": false, 00:17:48.731 "supported_io_types": { 00:17:48.731 "read": true, 00:17:48.731 "write": true, 00:17:48.731 "unmap": true, 00:17:48.731 "flush": false, 00:17:48.731 "reset": true, 00:17:48.731 "nvme_admin": false, 00:17:48.731 "nvme_io": false, 00:17:48.731 "nvme_io_md": false, 00:17:48.731 "write_zeroes": true, 00:17:48.731 "zcopy": false, 00:17:48.731 "get_zone_info": false, 00:17:48.731 "zone_management": false, 00:17:48.731 "zone_append": false, 00:17:48.731 "compare": false, 00:17:48.731 "compare_and_write": false, 00:17:48.731 "abort": false, 00:17:48.731 "seek_hole": true, 00:17:48.731 "seek_data": true, 00:17:48.731 "copy": false, 00:17:48.731 "nvme_iov_md": false 00:17:48.731 }, 00:17:48.731 "driver_specific": { 00:17:48.731 "lvol": { 00:17:48.731 "lvol_store_uuid": "fd47ff06-4993-4493-a51a-f4ef96e67627", 00:17:48.731 "base_bdev": "nvme0n1", 00:17:48.731 "thin_provision": true, 00:17:48.731 "num_allocated_clusters": 0, 00:17:48.731 "snapshot": false, 00:17:48.731 "clone": false, 00:17:48.731 "esnap_clone": false 00:17:48.731 } 00:17:48.731 } 00:17:48.731 } 00:17:48.731 ]' 00:17:48.731 23:02:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:48.731 23:02:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:48.731 23:02:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:48.731 23:02:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:48.731 23:02:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:48.731 23:02:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:48.731 23:02:27 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:48.731 23:02:27 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:48.731 23:02:27 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4a849864-ca57-496c-9672-668982fd896d -c nvc0n1p0 --l2p_dram_limit 60 00:17:48.991 [2024-11-26 23:02:27.891710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.991 [2024-11-26 23:02:27.891751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:48.991 [2024-11-26 23:02:27.891765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:48.991 [2024-11-26 23:02:27.891772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.991 [2024-11-26 23:02:27.891839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.991 [2024-11-26 23:02:27.891847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:48.991 [2024-11-26 23:02:27.891858] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:17:48.991 [2024-11-26 23:02:27.891864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.991 [2024-11-26 23:02:27.891898] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:48.991 [2024-11-26 23:02:27.892112] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:48.991 [2024-11-26 23:02:27.892126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.991 [2024-11-26 23:02:27.892133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:48.991 [2024-11-26 23:02:27.892141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:17:48.991 [2024-11-26 23:02:27.892146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.991 [2024-11-26 23:02:27.892185] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a4be9e11-6e44-4a1a-b21a-2df9d640ba34 00:17:48.991 [2024-11-26 23:02:27.893497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.991 [2024-11-26 23:02:27.893527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:48.991 [2024-11-26 23:02:27.893535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:48.991 [2024-11-26 23:02:27.893542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.991 [2024-11-26 23:02:27.900286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.991 [2024-11-26 23:02:27.900329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:48.991 [2024-11-26 23:02:27.900337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.660 ms 00:17:48.991 [2024-11-26 23:02:27.900349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.991 [2024-11-26 23:02:27.900422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.991 [2024-11-26 23:02:27.900432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:48.991 [2024-11-26 23:02:27.900439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:48.991 [2024-11-26 23:02:27.900447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.991 [2024-11-26 23:02:27.900490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.991 [2024-11-26 23:02:27.900514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:48.991 [2024-11-26 23:02:27.900522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:48.991 [2024-11-26 23:02:27.900532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.991 [2024-11-26 23:02:27.900567] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:48.991 [2024-11-26 23:02:27.902167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.991 [2024-11-26 23:02:27.902194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:48.991 [2024-11-26 23:02:27.902204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.611 ms 00:17:48.991 [2024-11-26 23:02:27.902211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.991 [2024-11-26 23:02:27.902250] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.991 [2024-11-26 23:02:27.902258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:48.991 [2024-11-26 23:02:27.902268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:48.991 [2024-11-26 23:02:27.902277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.991 [2024-11-26 23:02:27.902325] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:48.991 [2024-11-26 23:02:27.902453] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:48.991 [2024-11-26 23:02:27.902465] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:48.991 [2024-11-26 23:02:27.902476] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:48.991 [2024-11-26 23:02:27.902494] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:48.991 [2024-11-26 23:02:27.902500] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:48.991 [2024-11-26 23:02:27.902509] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:48.991 [2024-11-26 23:02:27.902515] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:48.991 [2024-11-26 23:02:27.902521] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:48.991 [2024-11-26 23:02:27.902535] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:48.991 [2024-11-26 23:02:27.902542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.991 [2024-11-26 23:02:27.902548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:48.991 [2024-11-26 23:02:27.902555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:17:48.991 [2024-11-26 23:02:27.902561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.991 [2024-11-26 23:02:27.902641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.991 [2024-11-26 23:02:27.902648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:48.991 [2024-11-26 23:02:27.902655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:48.991 [2024-11-26 23:02:27.902660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.991 [2024-11-26 23:02:27.902753] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:48.991 [2024-11-26 23:02:27.902760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:48.991 [2024-11-26 23:02:27.902768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:48.991 [2024-11-26 23:02:27.902774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.991 [2024-11-26 23:02:27.902782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:48.991 [2024-11-26 23:02:27.902787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:48.991 [2024-11-26 23:02:27.902793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:48.991 [2024-11-26 23:02:27.902798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:48.991 
[2024-11-26 23:02:27.902804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:48.991 [2024-11-26 23:02:27.902809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:48.991 [2024-11-26 23:02:27.902815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:48.992 [2024-11-26 23:02:27.902820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:48.992 [2024-11-26 23:02:27.902829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:48.992 [2024-11-26 23:02:27.902834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:48.992 [2024-11-26 23:02:27.902840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:48.992 [2024-11-26 23:02:27.902845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.992 [2024-11-26 23:02:27.902851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:48.992 [2024-11-26 23:02:27.902868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:48.992 [2024-11-26 23:02:27.902874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.992 [2024-11-26 23:02:27.902879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:48.992 [2024-11-26 23:02:27.902885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:48.992 [2024-11-26 23:02:27.902890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:48.992 [2024-11-26 23:02:27.902896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:48.992 [2024-11-26 23:02:27.902901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:48.992 [2024-11-26 23:02:27.902907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:48.992 [2024-11-26 23:02:27.902917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:48.992 [2024-11-26 23:02:27.902923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:48.992 [2024-11-26 23:02:27.902928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:48.992 [2024-11-26 23:02:27.902936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:48.992 [2024-11-26 23:02:27.902941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:48.992 [2024-11-26 23:02:27.902948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:48.992 [2024-11-26 23:02:27.902953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:48.992 [2024-11-26 23:02:27.902960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:48.992 [2024-11-26 23:02:27.902964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:48.992 [2024-11-26 23:02:27.902970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:48.992 [2024-11-26 23:02:27.902975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:48.992 [2024-11-26 23:02:27.902982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:48.992 [2024-11-26 23:02:27.902987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:48.992 [2024-11-26 23:02:27.902993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:48.992 [2024-11-26 23:02:27.902998] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:17:48.992 [2024-11-26 23:02:27.903004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:48.992 [2024-11-26 23:02:27.903009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:48.992 [2024-11-26 23:02:27.903015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.992 [2024-11-26 23:02:27.903020] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:48.992 [2024-11-26 23:02:27.903038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:48.992 [2024-11-26 23:02:27.903044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:48.992 [2024-11-26 23:02:27.903051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.992 [2024-11-26 23:02:27.903057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:48.992 [2024-11-26 23:02:27.903063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:48.992 [2024-11-26 23:02:27.903067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:48.992 [2024-11-26 23:02:27.903074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:48.992 [2024-11-26 23:02:27.903079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:48.992 [2024-11-26 23:02:27.903085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:48.992 [2024-11-26 23:02:27.903093] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:48.992 [2024-11-26 23:02:27.903102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:48.992 [2024-11-26 23:02:27.903108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:48.992 [2024-11-26 23:02:27.903115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:48.992 [2024-11-26 23:02:27.903123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:48.992 [2024-11-26 23:02:27.903130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:48.992 [2024-11-26 23:02:27.903136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:48.992 [2024-11-26 23:02:27.903144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:48.992 [2024-11-26 23:02:27.903149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:48.992 [2024-11-26 23:02:27.903156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:48.992 [2024-11-26 23:02:27.903161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:48.992 [2024-11-26 23:02:27.903167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 
blk_sz:0x20 00:17:48.992 [2024-11-26 23:02:27.903173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:48.992 [2024-11-26 23:02:27.903179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:48.992 [2024-11-26 23:02:27.903184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:48.992 [2024-11-26 23:02:27.903191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:48.992 [2024-11-26 23:02:27.903196] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:48.992 [2024-11-26 23:02:27.903204] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:48.992 [2024-11-26 23:02:27.903210] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:48.992 [2024-11-26 23:02:27.903217] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:48.992 [2024-11-26 23:02:27.903222] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:48.992 [2024-11-26 23:02:27.903229] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:48.992 [2024-11-26 23:02:27.903235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.992 [2024-11-26 23:02:27.903244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:48.992 [2024-11-26 23:02:27.903250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:17:48.992 [2024-11-26 23:02:27.903257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.992 [2024-11-26 23:02:27.903329] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
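At this point the whole FTL device stack is assembled and ftl0 runs its first startup: layout setup mode 1, a freshly created superblock a4be9e11-6e44-4a1a-b21a-2df9d640ba34, then a scrub of the NV cache data region. The earlier bash complaint from fio.sh line 52 ('[: -eq: unary operator expected') is the classic symptom of an empty, unquoted variable in a '[ ... -eq 1 ]' test; the test returns nonzero, the script falls through, and the run continues, so it is noise here rather than a failure. The RPC sequence that built the stack can be read back out of the trace; a sketch using this run's addresses and UUIDs (both are per-run values, and the actual logic lives in test/ftl/common.sh and fio.sh):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0    # base NVMe: 1310720 x 4 KiB blocks = 5120 MiB
    $RPC bdev_lvol_create_lvstore nvme0n1 lvs                            # -> lvs fd47ff06-4993-4493-a51a-f4ef96e67627
    $RPC bdev_lvol_create nvme0n1p0 103424 -t -u fd47ff06-4993-4493-a51a-f4ef96e67627   # thin-provisioned 103424 MiB lvol
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0     # NV cache NVMe
    $RPC bdev_split_create nvc0n1 -s 5171 1                              # one 5171 MiB slice -> nvc0n1p0
    $RPC -t 240 bdev_ftl_create -b ftl0 -d 4a849864-ca57-496c-9672-668982fd896d -c nvc0n1p0 --l2p_dram_limit 60

The layout dump above also pins down the L2P budget: 20971520 entries at 4 bytes each is exactly 80 MiB of mapping table (the 80.00 MiB l2p region in the NV cache layout), while --l2p_dram_limit 60 caps the DRAM-resident portion at 60 MiB, so the table is demand-paged against the cache device.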
00:17:48.992 [2024-11-26 23:02:27.903340] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:52.283 [2024-11-26 23:02:30.816816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.283 [2024-11-26 23:02:30.816892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:52.283 [2024-11-26 23:02:30.816908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2913.475 ms 00:17:52.283 [2024-11-26 23:02:30.816918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.283 [2024-11-26 23:02:30.827741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.283 [2024-11-26 23:02:30.827931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:52.283 [2024-11-26 23:02:30.827950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.729 ms 00:17:52.283 [2024-11-26 23:02:30.827975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.283 [2024-11-26 23:02:30.828108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.283 [2024-11-26 23:02:30.828132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:52.283 [2024-11-26 23:02:30.828151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:52.283 [2024-11-26 23:02:30.828161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.283 [2024-11-26 23:02:30.847144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.283 [2024-11-26 23:02:30.847188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:52.283 [2024-11-26 23:02:30.847200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.935 ms 00:17:52.283 [2024-11-26 23:02:30.847211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.283 [2024-11-26 23:02:30.847250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.283 [2024-11-26 23:02:30.847262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:52.283 [2024-11-26 23:02:30.847271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:52.283 [2024-11-26 23:02:30.847281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.283 [2024-11-26 23:02:30.847776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.283 [2024-11-26 23:02:30.847827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:52.283 [2024-11-26 23:02:30.847837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:17:52.283 [2024-11-26 23:02:30.847849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.283 [2024-11-26 23:02:30.847970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.283 [2024-11-26 23:02:30.847983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:52.283 [2024-11-26 23:02:30.847992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:17:52.283 [2024-11-26 23:02:30.848001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.283 [2024-11-26 23:02:30.854969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.283 [2024-11-26 23:02:30.855004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:52.283 [2024-11-26 
23:02:30.855017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.944 ms 00:17:52.283 [2024-11-26 23:02:30.855027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.283 [2024-11-26 23:02:30.864405] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:52.283 [2024-11-26 23:02:30.881441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.283 [2024-11-26 23:02:30.881591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:52.283 [2024-11-26 23:02:30.881610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.331 ms 00:17:52.283 [2024-11-26 23:02:30.881621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.283 [2024-11-26 23:02:30.927554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.283 [2024-11-26 23:02:30.927600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:52.283 [2024-11-26 23:02:30.927617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.894 ms 00:17:52.283 [2024-11-26 23:02:30.927625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.283 [2024-11-26 23:02:30.927831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.283 [2024-11-26 23:02:30.927842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:52.283 [2024-11-26 23:02:30.927853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:17:52.283 [2024-11-26 23:02:30.927861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.283 [2024-11-26 23:02:30.930910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.283 [2024-11-26 23:02:30.930944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:52.283 [2024-11-26 23:02:30.930956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.016 ms 00:17:52.283 [2024-11-26 23:02:30.930964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.283 [2024-11-26 23:02:30.933837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.283 [2024-11-26 23:02:30.933866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:52.283 [2024-11-26 23:02:30.933878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.819 ms 00:17:52.283 [2024-11-26 23:02:30.933885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.283 [2024-11-26 23:02:30.934198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.283 [2024-11-26 23:02:30.934218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:52.283 [2024-11-26 23:02:30.934231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:17:52.283 [2024-11-26 23:02:30.934239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.283 [2024-11-26 23:02:30.960011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.283 [2024-11-26 23:02:30.960139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:52.283 [2024-11-26 23:02:30.960159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.741 ms 00:17:52.283 [2024-11-26 23:02:30.960168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.283 [2024-11-26 23:02:30.964280] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.283 [2024-11-26 23:02:30.964322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:52.283 [2024-11-26 23:02:30.964334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.042 ms 00:17:52.283 [2024-11-26 23:02:30.964342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.283 [2024-11-26 23:02:30.967842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.283 [2024-11-26 23:02:30.967869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:52.283 [2024-11-26 23:02:30.967880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.456 ms 00:17:52.284 [2024-11-26 23:02:30.967887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.284 [2024-11-26 23:02:30.972214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.284 [2024-11-26 23:02:30.972244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:52.284 [2024-11-26 23:02:30.972257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.283 ms 00:17:52.284 [2024-11-26 23:02:30.972264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.284 [2024-11-26 23:02:30.972346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.284 [2024-11-26 23:02:30.972357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:52.284 [2024-11-26 23:02:30.972377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:52.284 [2024-11-26 23:02:30.972385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.284 [2024-11-26 23:02:30.972468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.284 [2024-11-26 23:02:30.972478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:52.284 [2024-11-26 23:02:30.972487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:52.284 [2024-11-26 23:02:30.972496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.284 [2024-11-26 23:02:30.973551] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3081.337 ms, result 0 00:17:52.284 { 00:17:52.284 "name": "ftl0", 00:17:52.284 "uuid": "a4be9e11-6e44-4a1a-b21a-2df9d640ba34" 00:17:52.284 } 00:17:52.284 23:02:30 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:52.284 23:02:30 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:17:52.284 23:02:30 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:52.284 23:02:30 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:17:52.284 23:02:30 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:52.284 23:02:30 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:52.284 23:02:30 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:52.284 23:02:31 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:52.284 [ 00:17:52.284 { 00:17:52.284 "name": "ftl0", 00:17:52.284 "aliases": [ 00:17:52.284 "a4be9e11-6e44-4a1a-b21a-2df9d640ba34" 00:17:52.284 ], 00:17:52.284 "product_name": "FTL disk", 00:17:52.284 
"block_size": 4096, 00:17:52.284 "num_blocks": 20971520, 00:17:52.284 "uuid": "a4be9e11-6e44-4a1a-b21a-2df9d640ba34", 00:17:52.284 "assigned_rate_limits": { 00:17:52.284 "rw_ios_per_sec": 0, 00:17:52.284 "rw_mbytes_per_sec": 0, 00:17:52.284 "r_mbytes_per_sec": 0, 00:17:52.284 "w_mbytes_per_sec": 0 00:17:52.284 }, 00:17:52.284 "claimed": false, 00:17:52.284 "zoned": false, 00:17:52.284 "supported_io_types": { 00:17:52.284 "read": true, 00:17:52.284 "write": true, 00:17:52.284 "unmap": true, 00:17:52.284 "flush": true, 00:17:52.284 "reset": false, 00:17:52.284 "nvme_admin": false, 00:17:52.284 "nvme_io": false, 00:17:52.284 "nvme_io_md": false, 00:17:52.284 "write_zeroes": true, 00:17:52.284 "zcopy": false, 00:17:52.284 "get_zone_info": false, 00:17:52.284 "zone_management": false, 00:17:52.284 "zone_append": false, 00:17:52.284 "compare": false, 00:17:52.284 "compare_and_write": false, 00:17:52.284 "abort": false, 00:17:52.284 "seek_hole": false, 00:17:52.284 "seek_data": false, 00:17:52.284 "copy": false, 00:17:52.284 "nvme_iov_md": false 00:17:52.284 }, 00:17:52.284 "driver_specific": { 00:17:52.284 "ftl": { 00:17:52.284 "base_bdev": "4a849864-ca57-496c-9672-668982fd896d", 00:17:52.284 "cache": "nvc0n1p0" 00:17:52.284 } 00:17:52.284 } 00:17:52.284 } 00:17:52.284 ] 00:17:52.284 23:02:31 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:17:52.284 23:02:31 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:52.284 23:02:31 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:52.546 23:02:31 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:17:52.546 23:02:31 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:52.807 [2024-11-26 23:02:31.812980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.807 [2024-11-26 23:02:31.813015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:52.807 [2024-11-26 23:02:31.813024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:52.807 [2024-11-26 23:02:31.813032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.807 [2024-11-26 23:02:31.813059] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:52.807 [2024-11-26 23:02:31.813617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.807 [2024-11-26 23:02:31.813643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:52.807 [2024-11-26 23:02:31.813655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:17:52.807 [2024-11-26 23:02:31.813661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.807 [2024-11-26 23:02:31.814047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.807 [2024-11-26 23:02:31.814059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:52.807 [2024-11-26 23:02:31.814080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:17:52.807 [2024-11-26 23:02:31.814086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.807 [2024-11-26 23:02:31.816505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.807 [2024-11-26 23:02:31.816521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:52.807 [2024-11-26 
23:02:31.816532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.397 ms 00:17:52.807 [2024-11-26 23:02:31.816539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.807 [2024-11-26 23:02:31.821180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.807 [2024-11-26 23:02:31.821201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:52.807 [2024-11-26 23:02:31.821211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.619 ms 00:17:52.807 [2024-11-26 23:02:31.821217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.807 [2024-11-26 23:02:31.822655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.807 [2024-11-26 23:02:31.822778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:52.807 [2024-11-26 23:02:31.822793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.373 ms 00:17:52.807 [2024-11-26 23:02:31.822799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.807 [2024-11-26 23:02:31.826958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.807 [2024-11-26 23:02:31.826987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:52.807 [2024-11-26 23:02:31.826997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.121 ms 00:17:52.807 [2024-11-26 23:02:31.827004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.807 [2024-11-26 23:02:31.827135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.807 [2024-11-26 23:02:31.827143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:52.807 [2024-11-26 23:02:31.827152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:17:52.807 [2024-11-26 23:02:31.827157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.807 [2024-11-26 23:02:31.828543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.807 [2024-11-26 23:02:31.828567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:52.807 [2024-11-26 23:02:31.828576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.363 ms 00:17:52.807 [2024-11-26 23:02:31.828581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.807 [2024-11-26 23:02:31.829618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.807 [2024-11-26 23:02:31.829712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:52.807 [2024-11-26 23:02:31.829726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.001 ms 00:17:52.807 [2024-11-26 23:02:31.829732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.807 [2024-11-26 23:02:31.830563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.807 [2024-11-26 23:02:31.830583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:52.807 [2024-11-26 23:02:31.830591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.796 ms 00:17:52.807 [2024-11-26 23:02:31.830596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.807 [2024-11-26 23:02:31.831427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.807 [2024-11-26 23:02:31.831451] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:52.807 [2024-11-26 23:02:31.831459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.770 ms 00:17:52.807 [2024-11-26 23:02:31.831464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.807 [2024-11-26 23:02:31.831498] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:52.807 [2024-11-26 23:02:31.831510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:52.807 [2024-11-26 23:02:31.831521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:52.807 [2024-11-26 23:02:31.831526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:52.807 [2024-11-26 23:02:31.831537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:52.807 [2024-11-26 23:02:31.831543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:52.807 [2024-11-26 23:02:31.831550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:52.807 [2024-11-26 23:02:31.831556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:52.807 [2024-11-26 23:02:31.831563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:52.807 [2024-11-26 23:02:31.831569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:52.807 [2024-11-26 23:02:31.831576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:52.807 [2024-11-26 23:02:31.831581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:52.807 [2024-11-26 23:02:31.831588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:52.807 [2024-11-26 23:02:31.831593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:52.807 [2024-11-26 23:02:31.831600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:52.807 [2024-11-26 23:02:31.831605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:52.807 [2024-11-26 23:02:31.831613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:52.807 [2024-11-26 23:02:31.831619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:52.807 [2024-11-26 23:02:31.831626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:52.807 [2024-11-26 23:02:31.831631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:52.807 [2024-11-26 23:02:31.831639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:52.807 [2024-11-26 23:02:31.831645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:52.807 [2024-11-26 23:02:31.831652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:52.807 [2024-11-26 
23:02:31.831657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:52.807 [2024-11-26 23:02:31.831665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:52.807 [2024-11-26 23:02:31.831670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:52.807 [2024-11-26 23:02:31.831677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:17:52.808 [2024-11-26 23:02:31.831823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.831999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.832006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.832011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.832018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.832024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.832031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.832037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.832044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.832050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.832059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.832066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.832073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.832078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.832097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.832103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.832110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.832116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.832123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.832129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.832136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.832142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.832151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:52.808 [2024-11-26 23:02:31.832157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free
00:17:52.808 [2024-11-26 23:02:31.832165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:17:52.808 [2024-11-26 23:02:31.832171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:17:52.808 [2024-11-26 23:02:31.832180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:17:52.808 [2024-11-26 23:02:31.832192] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:17:52.808 [2024-11-26 23:02:31.832200] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a4be9e11-6e44-4a1a-b21a-2df9d640ba34
00:17:52.808 [2024-11-26 23:02:31.832206] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:17:52.808 [2024-11-26 23:02:31.832222] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:17:52.808 [2024-11-26 23:02:31.832229] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:17:52.808 [2024-11-26 23:02:31.832237] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:17:52.808 [2024-11-26 23:02:31.832242] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:17:52.808 [2024-11-26 23:02:31.832250] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:17:52.808 [2024-11-26 23:02:31.832264] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:17:52.808 [2024-11-26 23:02:31.832270] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:17:52.808 [2024-11-26 23:02:31.832275] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:17:52.808 [2024-11-26 23:02:31.832282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:52.808 [2024-11-26 23:02:31.832304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:17:52.808 [2024-11-26 23:02:31.832312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.785 ms
00:17:52.808 [2024-11-26 23:02:31.832319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:52.808 [2024-11-26 23:02:31.834227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:52.808 [2024-11-26 23:02:31.834316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:17:52.808 [2024-11-26 23:02:31.834366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.875 ms
00:17:52.808 [2024-11-26 23:02:31.834394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:52.808 [2024-11-26 23:02:31.834504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:52.809 [2024-11-26 23:02:31.834523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:17:52.809 [2024-11-26 23:02:31.834571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms
00:17:52.809 [2024-11-26 23:02:31.834588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:52.809 [2024-11-26 23:02:31.840577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:52.809 [2024-11-26 23:02:31.840674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:17:52.809 [2024-11-26 23:02:31.840734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:52.809 [2024-11-26 23:02:31.840753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
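The statistics block above is the easiest place to see what the "WAF: inf" line means. WAF (write amplification factor) is conventionally total media writes divided by user writes, which matches the "total writes: 960" and "user writes: 0" fields in the dump; with zero user writes the ratio is undefined and the dump prints inf. A minimal sketch of that arithmetic, using the field values from this run (the helper is illustrative, not SPDK code):

```bash
#!/usr/bin/env bash
# Recompute the WAF reported by ftl_dev_dump_stats above.
total_writes=960   # "total writes: 960" in the dump
user_writes=0      # "user writes: 0" in the dump
if (( user_writes == 0 )); then
    echo "WAF: inf"   # no user data was written in this phase, only FTL metadata
else
    awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.2f\n", t / u }'
fi
```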
00:17:52.809 [2024-11-26 23:02:31.840816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:52.809 [2024-11-26 23:02:31.840889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:17:52.809 [2024-11-26 23:02:31.840913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:52.809 [2024-11-26 23:02:31.840927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:52.809 [2024-11-26 23:02:31.841000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:52.809 [2024-11-26 23:02:31.841092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:17:52.809 [2024-11-26 23:02:31.841113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:52.809 [2024-11-26 23:02:31.841129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:52.809 [2024-11-26 23:02:31.841164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:52.809 [2024-11-26 23:02:31.841211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:17:52.809 [2024-11-26 23:02:31.841231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:52.809 [2024-11-26 23:02:31.841247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:52.809 [2024-11-26 23:02:31.852340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:52.809 [2024-11-26 23:02:31.852453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:17:52.809 [2024-11-26 23:02:31.852507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:52.809 [2024-11-26 23:02:31.852543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:52.809 [2024-11-26 23:02:31.861424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:52.809 [2024-11-26 23:02:31.861538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:17:52.809 [2024-11-26 23:02:31.861620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:52.809 [2024-11-26 23:02:31.861639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:52.809 [2024-11-26 23:02:31.861749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:52.809 [2024-11-26 23:02:31.861828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:17:52.809 [2024-11-26 23:02:31.861849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:52.809 [2024-11-26 23:02:31.861864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:52.809 [2024-11-26 23:02:31.861924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:52.809 [2024-11-26 23:02:31.861942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:17:52.809 [2024-11-26 23:02:31.862005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:52.809 [2024-11-26 23:02:31.862022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:52.809 [2024-11-26 23:02:31.862116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:52.809 [2024-11-26 23:02:31.862162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:17:52.809 [2024-11-26 23:02:31.862227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:52.809 [2024-11-26 23:02:31.862245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:52.809 [2024-11-26 23:02:31.862316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:52.809 [2024-11-26 23:02:31.862415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:17:52.809 [2024-11-26 23:02:31.862436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:52.809 [2024-11-26 23:02:31.862451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:52.809 [2024-11-26 23:02:31.862510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:52.809 [2024-11-26 23:02:31.862565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:52.809 [2024-11-26 23:02:31.862586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:52.809 [2024-11-26 23:02:31.862601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:52.809 [2024-11-26 23:02:31.862671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:52.809 [2024-11-26 23:02:31.862744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:52.809 [2024-11-26 23:02:31.862762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:52.809 [2024-11-26 23:02:31.862777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:52.809 [2024-11-26 23:02:31.862984] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 49.963 ms, result 0
00:17:52.809 true
00:17:52.809 23:02:31 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 87966
00:17:52.809 23:02:31 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 87966 ']'
00:17:52.809 23:02:31 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 87966
00:17:52.809 23:02:31 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname
00:17:52.809 23:02:31 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:52.809 23:02:31 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87966
00:17:52.809 killing process with pid 87966
23:02:31 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:52.809 23:02:31 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:52.809 23:02:31 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87966'
00:17:52.809 23:02:31 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 87966
00:17:52.809 23:02:31 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 87966
00:17:56.185 23:02:35 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:17:56.185 23:02:35 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests}
00:17:56.185 23:02:35 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify
00:17:56.185 23:02:35 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:56.185 23:02:35 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:17:56.185 23:02:35 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:17:56.185 23:02:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:17:56.185 23:02:35 ftl.ftl_fio_basic --
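The killprocess helper traced just above (fio.sh@75 into autotest_common.sh@954-978) follows a fixed pattern: verify the pid argument, probe the process with kill -0, confirm the process name via ps, then kill and wait. A condensed reconstruction of that flow, assembled from the xtrace lines rather than copied from autotest_common.sh:

```bash
# Condensed reconstruction of the killprocess flow traced above;
# simplified for illustration, not the verbatim helper.
killprocess() {
    local pid=$1 process_name=
    [ -z "$pid" ] && return 1               # @954: refuse an empty pid
    kill -0 "$pid" || return 1              # @958: is the process still alive?
    if [ "$(uname)" = Linux ]; then         # @959
        process_name=$(ps --no-headers -o comm= "$pid")   # @960: e.g. reactor_0
    fi
    [ "$process_name" = sudo ] && return 1  # @964: never kill a sudo wrapper
    echo "killing process with pid $pid"    # @972
    kill "$pid"                             # @973
    wait "$pid"                             # @978: reap it and collect status
}
```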
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:56.185 23:02:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:56.185 23:02:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:56.185 23:02:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:56.185 23:02:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:17:56.185 23:02:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:56.185 23:02:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:56.186 23:02:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:17:56.186 23:02:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:56.186 23:02:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:56.186 23:02:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:56.186 23:02:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:56.186 23:02:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:17:56.186 23:02:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:56.186 23:02:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:56.446 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:17:56.446 fio-3.35 00:17:56.446 Starting 1 thread 00:18:01.741 00:18:01.741 test: (groupid=0, jobs=1): err= 0: pid=88136: Tue Nov 26 23:02:40 2024 00:18:01.741 read: IOPS=903, BW=60.0MiB/s (62.9MB/s)(255MiB/4243msec) 00:18:01.741 slat (nsec): min=3913, max=21977, avg=5309.27, stdev=1834.48 00:18:01.741 clat (usec): min=269, max=1601, avg=500.96, stdev=128.28 00:18:01.741 lat (usec): min=274, max=1607, avg=506.26, stdev=128.38 00:18:01.741 clat percentiles (usec): 00:18:01.741 | 1.00th=[ 310], 5.00th=[ 326], 10.00th=[ 375], 20.00th=[ 396], 00:18:01.741 | 30.00th=[ 441], 40.00th=[ 457], 50.00th=[ 498], 60.00th=[ 515], 00:18:01.741 | 70.00th=[ 529], 80.00th=[ 562], 90.00th=[ 660], 95.00th=[ 799], 00:18:01.741 | 99.00th=[ 865], 99.50th=[ 914], 99.90th=[ 1004], 99.95th=[ 1582], 00:18:01.741 | 99.99th=[ 1598] 00:18:01.741 write: IOPS=910, BW=60.4MiB/s (63.4MB/s)(256MiB/4236msec); 0 zone resets 00:18:01.741 slat (usec): min=14, max=126, avg=19.28, stdev= 3.65 00:18:01.741 clat (usec): min=292, max=1649, avg=565.65, stdev=142.60 00:18:01.741 lat (usec): min=317, max=1669, avg=584.93, stdev=142.72 00:18:01.741 clat percentiles (usec): 00:18:01.741 | 1.00th=[ 338], 5.00th=[ 355], 10.00th=[ 424], 20.00th=[ 469], 00:18:01.741 | 30.00th=[ 482], 40.00th=[ 498], 50.00th=[ 545], 60.00th=[ 578], 00:18:01.741 | 70.00th=[ 603], 80.00th=[ 619], 90.00th=[ 791], 95.00th=[ 881], 00:18:01.741 | 99.00th=[ 979], 99.50th=[ 1045], 99.90th=[ 1221], 99.95th=[ 1516], 00:18:01.741 | 99.99th=[ 1647] 00:18:01.741 bw ( KiB/s): min=45696, max=69768, per=100.00%, avg=62849.00, stdev=7720.90, samples=8 00:18:01.741 iops : min= 672, max= 1026, avg=924.25, stdev=113.54, samples=8 00:18:01.741 lat (usec) : 500=45.94%, 750=44.65%, 1000=9.01% 00:18:01.741 
lat (msec) : 2=0.40% 00:18:01.741 cpu : usr=99.32%, sys=0.09%, ctx=9, majf=0, minf=1181 00:18:01.741 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:01.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.741 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.741 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:01.741 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:01.741 00:18:01.741 Run status group 0 (all jobs): 00:18:01.741 READ: bw=60.0MiB/s (62.9MB/s), 60.0MiB/s-60.0MiB/s (62.9MB/s-62.9MB/s), io=255MiB (267MB), run=4243-4243msec 00:18:01.741 WRITE: bw=60.4MiB/s (63.4MB/s), 60.4MiB/s-60.4MiB/s (63.4MB/s-63.4MB/s), io=256MiB (269MB), run=4236-4236msec 00:18:02.312 ----------------------------------------------------- 00:18:02.312 Suppressions used: 00:18:02.312 count bytes template 00:18:02.312 1 5 /usr/src/fio/parse.c 00:18:02.312 1 8 libtcmalloc_minimal.so 00:18:02.312 1 904 libcrypto.so 00:18:02.312 ----------------------------------------------------- 00:18:02.312 00:18:02.312 23:02:41 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:02.312 23:02:41 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:02.312 23:02:41 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:02.312 23:02:41 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:02.312 23:02:41 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:02.312 23:02:41 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:02.312 23:02:41 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:02.312 23:02:41 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:02.312 23:02:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:02.312 23:02:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:02.312 23:02:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:02.312 23:02:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:02.312 23:02:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:02.312 23:02:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:02.312 23:02:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:02.312 23:02:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:02.312 23:02:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:02.312 23:02:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:02.313 23:02:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:02.313 23:02:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:02.313 23:02:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:02.313 23:02:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:02.313 23:02:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:02.313 23:02:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:02.574 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:02.574 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:02.574 fio-3.35 00:18:02.574 Starting 2 threads 00:18:34.722 00:18:34.722 first_half: (groupid=0, jobs=1): err= 0: pid=88239: Tue Nov 26 23:03:11 2024 00:18:34.722 read: IOPS=2270, BW=9081KiB/s (9299kB/s)(255MiB/28767msec) 00:18:34.722 slat (nsec): min=3064, max=67921, avg=5166.66, stdev=1598.05 00:18:34.722 clat (usec): min=728, max=451677, avg=43386.64, stdev=26336.88 00:18:34.722 lat (usec): min=733, max=451682, avg=43391.81, stdev=26336.93 00:18:34.722 clat percentiles (msec): 00:18:34.722 | 1.00th=[ 17], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 33], 00:18:34.723 | 30.00th=[ 34], 40.00th=[ 36], 50.00th=[ 37], 60.00th=[ 40], 00:18:34.723 | 70.00th=[ 42], 80.00th=[ 45], 90.00th=[ 56], 95.00th=[ 81], 00:18:34.723 | 99.00th=[ 176], 99.50th=[ 205], 99.90th=[ 284], 99.95th=[ 368], 00:18:34.723 | 99.99th=[ 435] 00:18:34.723 write: IOPS=2703, BW=10.6MiB/s (11.1MB/s)(256MiB/24245msec); 0 zone resets 00:18:34.723 slat (usec): min=3, max=2508, avg= 6.74, stdev=13.03 00:18:34.723 clat (usec): min=496, max=114141, avg=12925.70, stdev=19301.33 00:18:34.723 lat (usec): min=502, max=114147, avg=12932.44, stdev=19301.42 00:18:34.723 clat percentiles (usec): 00:18:34.723 | 1.00th=[ 1012], 5.00th=[ 1369], 10.00th=[ 1614], 20.00th=[ 2212], 00:18:34.723 | 30.00th=[ 4293], 40.00th=[ 5932], 50.00th=[ 7963], 60.00th=[ 9503], 00:18:34.723 | 70.00th=[ 11600], 80.00th=[ 14484], 90.00th=[ 19792], 95.00th=[ 55313], 00:18:34.723 | 99.00th=[ 95945], 99.50th=[101188], 99.90th=[108528], 99.95th=[110625], 00:18:34.723 | 99.99th=[112722] 00:18:34.723 bw ( KiB/s): min= 2320, max=34920, per=98.34%, avg=19418.07, stdev=8245.06, samples=27 00:18:34.723 iops : min= 580, max= 8730, avg=4854.52, stdev=2061.27, samples=27 00:18:34.723 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.42% 00:18:34.723 lat (msec) : 2=8.43%, 4=5.51%, 10=17.25%, 20=14.03%, 50=44.69% 00:18:34.723 lat (msec) : 100=7.73%, 250=1.77%, 500=0.11% 00:18:34.723 cpu : usr=99.34%, sys=0.12%, ctx=113, majf=0, minf=5609 00:18:34.723 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:34.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.723 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:34.723 issued rwts: total=65308,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.723 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:34.723 second_half: (groupid=0, jobs=1): err= 0: pid=88240: Tue Nov 26 23:03:11 2024 00:18:34.723 read: IOPS=2248, BW=8995KiB/s (9211kB/s)(255MiB/29044msec) 00:18:34.723 slat (nsec): min=2993, max=37753, avg=5180.76, stdev=1672.54 00:18:34.723 clat (usec): min=1044, max=452411, avg=43094.21, stdev=32498.95 00:18:34.723 lat (usec): min=1048, max=452420, avg=43099.39, stdev=32499.06 00:18:34.723 clat percentiles (msec): 00:18:34.723 | 1.00th=[ 13], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 32], 00:18:34.723 | 30.00th=[ 34], 40.00th=[ 36], 50.00th=[ 37], 60.00th=[ 39], 00:18:34.723 | 70.00th=[ 41], 80.00th=[ 44], 90.00th=[ 52], 95.00th=[ 68], 
00:18:34.723 | 99.00th=[ 224], 99.50th=[ 259], 99.90th=[ 313], 99.95th=[ 338], 00:18:34.723 | 99.99th=[ 447] 00:18:34.723 write: IOPS=2468, BW=9873KiB/s (10.1MB/s)(256MiB/26551msec); 0 zone resets 00:18:34.723 slat (usec): min=3, max=528, avg= 6.63, stdev= 5.72 00:18:34.723 clat (usec): min=483, max=114430, avg=13761.17, stdev=21616.16 00:18:34.723 lat (usec): min=489, max=114436, avg=13767.81, stdev=21616.23 00:18:34.723 clat percentiles (usec): 00:18:34.723 | 1.00th=[ 1004], 5.00th=[ 1369], 10.00th=[ 1631], 20.00th=[ 2057], 00:18:34.723 | 30.00th=[ 2802], 40.00th=[ 4080], 50.00th=[ 5735], 60.00th=[ 7308], 00:18:34.723 | 70.00th=[ 9765], 80.00th=[ 16581], 90.00th=[ 41681], 95.00th=[ 68682], 00:18:34.723 | 99.00th=[ 96994], 99.50th=[101188], 99.90th=[109577], 99.95th=[111674], 00:18:34.723 | 99.99th=[113771] 00:18:34.723 bw ( KiB/s): min= 160, max=48504, per=85.66%, avg=16914.23, stdev=14194.96, samples=31 00:18:34.723 iops : min= 40, max=12126, avg=4228.55, stdev=3548.73, samples=31 00:18:34.723 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.45% 00:18:34.723 lat (msec) : 2=8.94%, 4=10.30%, 10=15.71%, 20=9.29%, 50=45.87% 00:18:34.723 lat (msec) : 100=7.20%, 250=1.91%, 500=0.28% 00:18:34.723 cpu : usr=99.25%, sys=0.08%, ctx=44, majf=0, minf=5527 00:18:34.723 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:34.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.723 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:34.723 issued rwts: total=65313,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.723 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:34.723 00:18:34.723 Run status group 0 (all jobs): 00:18:34.723 READ: bw=17.6MiB/s (18.4MB/s), 8995KiB/s-9081KiB/s (9211kB/s-9299kB/s), io=510MiB (535MB), run=28767-29044msec 00:18:34.723 WRITE: bw=19.3MiB/s (20.2MB/s), 9873KiB/s-10.6MiB/s (10.1MB/s-11.1MB/s), io=512MiB (537MB), run=24245-26551msec 00:18:34.723 ----------------------------------------------------- 00:18:34.723 Suppressions used: 00:18:34.723 count bytes template 00:18:34.723 2 10 /usr/src/fio/parse.c 00:18:34.723 4 384 /usr/src/fio/iolog.c 00:18:34.723 1 8 libtcmalloc_minimal.so 00:18:34.723 1 904 libcrypto.so 00:18:34.723 ----------------------------------------------------- 00:18:34.723 00:18:34.723 23:03:12 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:34.723 23:03:12 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:34.723 23:03:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:34.723 23:03:12 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:34.723 23:03:12 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:34.723 23:03:12 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:34.723 23:03:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:34.723 23:03:12 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:34.723 23:03:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:34.723 23:03:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:34.723 23:03:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:18:34.723 23:03:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:34.723 23:03:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:34.723 23:03:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:34.723 23:03:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:34.723 23:03:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:34.723 23:03:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:34.723 23:03:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:34.723 23:03:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:34.723 23:03:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:34.723 23:03:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:34.723 23:03:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:34.723 23:03:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:34.723 23:03:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:34.723 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:34.723 fio-3.35 00:18:34.723 Starting 1 thread 00:18:52.823 00:18:52.823 test: (groupid=0, jobs=1): err= 0: pid=88597: Tue Nov 26 23:03:29 2024 00:18:52.823 read: IOPS=7849, BW=30.7MiB/s (32.1MB/s)(255MiB/8307msec) 00:18:52.823 slat (nsec): min=3092, max=22396, avg=4469.24, stdev=1132.27 00:18:52.823 clat (usec): min=577, max=32304, avg=16299.50, stdev=1921.79 00:18:52.823 lat (usec): min=582, max=32308, avg=16303.97, stdev=1921.80 00:18:52.823 clat percentiles (usec): 00:18:52.823 | 1.00th=[14746], 5.00th=[15139], 10.00th=[15270], 20.00th=[15401], 00:18:52.823 | 30.00th=[15533], 40.00th=[15664], 50.00th=[15795], 60.00th=[15926], 00:18:52.823 | 70.00th=[16057], 80.00th=[16319], 90.00th=[17695], 95.00th=[20317], 00:18:52.823 | 99.00th=[25560], 99.50th=[27132], 99.90th=[29492], 99.95th=[30016], 00:18:52.823 | 99.99th=[31589] 00:18:52.823 write: IOPS=8795, BW=34.4MiB/s (36.0MB/s)(256MiB/7451msec); 0 zone resets 00:18:52.823 slat (usec): min=4, max=523, avg= 8.51, stdev= 6.14 00:18:52.823 clat (usec): min=540, max=73628, avg=14482.64, stdev=16304.16 00:18:52.823 lat (usec): min=545, max=73636, avg=14491.15, stdev=16304.29 00:18:52.823 clat percentiles (usec): 00:18:52.823 | 1.00th=[ 914], 5.00th=[ 1205], 10.00th=[ 1434], 20.00th=[ 1795], 00:18:52.823 | 30.00th=[ 2057], 40.00th=[ 3064], 50.00th=[10814], 60.00th=[13304], 00:18:52.823 | 70.00th=[15664], 80.00th=[18744], 90.00th=[47449], 95.00th=[52167], 00:18:52.823 | 99.00th=[58983], 99.50th=[61604], 99.90th=[66323], 99.95th=[66847], 00:18:52.823 | 99.99th=[68682] 00:18:52.823 bw ( KiB/s): min=27856, max=43888, per=99.35%, avg=34952.53, stdev=5163.73, samples=15 00:18:52.823 iops : min= 6964, max=10972, avg=8738.13, stdev=1290.93, samples=15 00:18:52.823 lat (usec) : 750=0.17%, 1000=0.67% 00:18:52.823 lat (msec) : 2=13.24%, 4=6.61%, 10=2.86%, 20=64.76%, 50=7.93% 00:18:52.823 lat (msec) : 100=3.76% 00:18:52.823 cpu : usr=98.86%, sys=0.30%, ctx=24, 
majf=0, minf=5577 00:18:52.823 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:52.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.823 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:52.823 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.823 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:52.823 00:18:52.823 Run status group 0 (all jobs): 00:18:52.823 READ: bw=30.7MiB/s (32.1MB/s), 30.7MiB/s-30.7MiB/s (32.1MB/s-32.1MB/s), io=255MiB (267MB), run=8307-8307msec 00:18:52.823 WRITE: bw=34.4MiB/s (36.0MB/s), 34.4MiB/s-34.4MiB/s (36.0MB/s-36.0MB/s), io=256MiB (268MB), run=7451-7451msec 00:18:52.823 ----------------------------------------------------- 00:18:52.823 Suppressions used: 00:18:52.823 count bytes template 00:18:52.823 1 5 /usr/src/fio/parse.c 00:18:52.823 2 192 /usr/src/fio/iolog.c 00:18:52.823 1 8 libtcmalloc_minimal.so 00:18:52.824 1 904 libcrypto.so 00:18:52.824 ----------------------------------------------------- 00:18:52.824 00:18:52.824 23:03:30 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:18:52.824 23:03:30 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:52.824 23:03:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:52.824 23:03:30 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:52.824 Remove shared memory files 00:18:52.824 23:03:30 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:18:52.824 23:03:30 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:52.824 23:03:30 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:18:52.824 23:03:30 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:18:52.824 23:03:30 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid70944 /dev/shm/spdk_tgt_trace.pid86910 00:18:52.824 23:03:30 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:52.824 23:03:30 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:18:52.824 ************************************ 00:18:52.824 END TEST ftl_fio_basic 00:18:52.824 ************************************ 00:18:52.824 00:18:52.824 real 1m5.966s 00:18:52.824 user 2m29.479s 00:18:52.824 sys 0m3.235s 00:18:52.824 23:03:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:52.824 23:03:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:52.824 23:03:30 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:52.824 23:03:30 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:52.824 23:03:30 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:52.824 23:03:30 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:52.824 ************************************ 00:18:52.824 START TEST ftl_bdevperf 00:18:52.824 ************************************ 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:52.824 * Looking for test storage... 
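Before the bdevperf run below gets going, it is worth condensing the fio_bdev/fio_plugin wrapper that drove all three fio jobs above (randw-verify, randw-verify-j2, randw-verify-depth128). Its xtrace appears before each run: it resolves the ASAN runtime that the SPDK fio plugin links against, then preloads both so fio can use spdk_bdev as its ioengine. A minimal sketch of that pattern under the paths from this log (the wrapper body is a simplification of autotest_common.sh, not a verbatim copy):

```bash
# Minimal sketch of the fio_plugin pattern traced before each fio run above.
fio_plugin() {
    local plugin=$1; shift
    local fio_dir=/usr/src/fio
    local asan_lib
    # ldd output looks like "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)";
    # field 3 is the resolved path that must be preloaded before the plugin.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" "$@"
}

fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
```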
00:18:52.824 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:52.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.824 --rc genhtml_branch_coverage=1 00:18:52.824 --rc genhtml_function_coverage=1 00:18:52.824 --rc genhtml_legend=1 00:18:52.824 --rc geninfo_all_blocks=1 00:18:52.824 --rc geninfo_unexecuted_blocks=1 00:18:52.824 00:18:52.824 ' 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:52.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.824 --rc genhtml_branch_coverage=1 00:18:52.824 
--rc genhtml_function_coverage=1 00:18:52.824 --rc genhtml_legend=1 00:18:52.824 --rc geninfo_all_blocks=1 00:18:52.824 --rc geninfo_unexecuted_blocks=1 00:18:52.824 00:18:52.824 ' 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:52.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.824 --rc genhtml_branch_coverage=1 00:18:52.824 --rc genhtml_function_coverage=1 00:18:52.824 --rc genhtml_legend=1 00:18:52.824 --rc geninfo_all_blocks=1 00:18:52.824 --rc geninfo_unexecuted_blocks=1 00:18:52.824 00:18:52.824 ' 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:52.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.824 --rc genhtml_branch_coverage=1 00:18:52.824 --rc genhtml_function_coverage=1 00:18:52.824 --rc genhtml_legend=1 00:18:52.824 --rc geninfo_all_blocks=1 00:18:52.824 --rc geninfo_unexecuted_blocks=1 00:18:52.824 00:18:52.824 ' 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:52.824 23:03:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:18:52.825 23:03:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:18:52.825 23:03:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:18:52.825 23:03:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:52.825 23:03:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:18:52.825 23:03:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=88852 00:18:52.825 23:03:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:18:52.825 23:03:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 88852 00:18:52.825 23:03:30 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 88852 ']' 00:18:52.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.825 23:03:30 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.825 23:03:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:18:52.825 23:03:30 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:52.825 23:03:30 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.825 23:03:30 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:52.825 23:03:30 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:52.825 [2024-11-26 23:03:30.576785] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:18:52.825 [2024-11-26 23:03:30.576915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88852 ] 00:18:52.825 [2024-11-26 23:03:30.709621] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
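bdevperf was started above with -z (stay idle until driven over RPC) and -T ftl0, and waitforlisten then blocks until the new target answers on /var/tmp/spdk.sock. A rough stand-in for that polling loop, using the real rpc.py rpc_get_methods call; max_retries=100 mirrors the trace, while the sleep interval is an assumption:

```bash
# Rough stand-in for the waitforlisten step traced above: poll the RPC
# socket until the freshly started bdevperf target starts answering.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
rpc_addr=/var/tmp/spdk.sock
max_retries=100                     # mirrors "local max_retries=100" above
for ((i = 0; i < max_retries; i++)); do
    if "$rpc_py" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
        break                       # target is up and serving RPCs
    fi
    sleep 0.5                       # assumed back-off; the real helper's differs
done
```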
00:18:52.825 [2024-11-26 23:03:30.739906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.825 [2024-11-26 23:03:30.766565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.825 23:03:31 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:52.825 23:03:31 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:18:52.825 23:03:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:52.825 23:03:31 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:18:52.825 23:03:31 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:52.825 23:03:31 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:18:52.825 23:03:31 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:18:52.825 23:03:31 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:52.825 23:03:31 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:52.825 23:03:31 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:18:52.825 23:03:31 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:52.825 23:03:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:52.825 23:03:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:52.825 23:03:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:52.825 23:03:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:52.825 23:03:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:52.825 23:03:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:52.825 { 00:18:52.825 "name": "nvme0n1", 00:18:52.825 "aliases": [ 00:18:52.825 "977f33cc-d632-45c1-af7c-2f8ed649581b" 00:18:52.825 ], 00:18:52.825 "product_name": "NVMe disk", 00:18:52.825 "block_size": 4096, 00:18:52.825 "num_blocks": 1310720, 00:18:52.825 "uuid": "977f33cc-d632-45c1-af7c-2f8ed649581b", 00:18:52.825 "numa_id": -1, 00:18:52.825 "assigned_rate_limits": { 00:18:52.825 "rw_ios_per_sec": 0, 00:18:52.825 "rw_mbytes_per_sec": 0, 00:18:52.825 "r_mbytes_per_sec": 0, 00:18:52.825 "w_mbytes_per_sec": 0 00:18:52.825 }, 00:18:52.825 "claimed": true, 00:18:52.825 "claim_type": "read_many_write_one", 00:18:52.825 "zoned": false, 00:18:52.825 "supported_io_types": { 00:18:52.825 "read": true, 00:18:52.825 "write": true, 00:18:52.825 "unmap": true, 00:18:52.825 "flush": true, 00:18:52.825 "reset": true, 00:18:52.825 "nvme_admin": true, 00:18:52.825 "nvme_io": true, 00:18:52.825 "nvme_io_md": false, 00:18:52.825 "write_zeroes": true, 00:18:52.825 "zcopy": false, 00:18:52.825 "get_zone_info": false, 00:18:52.825 "zone_management": false, 00:18:52.825 "zone_append": false, 00:18:52.825 "compare": true, 00:18:52.825 "compare_and_write": false, 00:18:52.825 "abort": true, 00:18:52.825 "seek_hole": false, 00:18:52.825 "seek_data": false, 00:18:52.825 "copy": true, 00:18:52.825 "nvme_iov_md": false 00:18:52.825 }, 00:18:52.825 "driver_specific": { 00:18:52.825 "nvme": [ 00:18:52.825 { 00:18:52.825 "pci_address": "0000:00:11.0", 00:18:52.825 "trid": { 00:18:52.825 "trtype": "PCIe", 00:18:52.825 "traddr": "0000:00:11.0" 00:18:52.825 }, 00:18:52.825 "ctrlr_data": { 00:18:52.825 "cntlid": 0, 00:18:52.825 "vendor_id": "0x1b36", 00:18:52.825 "model_number": "QEMU NVMe Ctrl", 
00:18:52.825 "serial_number": "12341", 00:18:52.825 "firmware_revision": "8.0.0", 00:18:52.825 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:52.825 "oacs": { 00:18:52.825 "security": 0, 00:18:52.825 "format": 1, 00:18:52.825 "firmware": 0, 00:18:52.825 "ns_manage": 1 00:18:52.825 }, 00:18:52.825 "multi_ctrlr": false, 00:18:52.825 "ana_reporting": false 00:18:52.825 }, 00:18:52.825 "vs": { 00:18:52.825 "nvme_version": "1.4" 00:18:52.825 }, 00:18:52.825 "ns_data": { 00:18:52.825 "id": 1, 00:18:52.825 "can_share": false 00:18:52.825 } 00:18:52.825 } 00:18:52.825 ], 00:18:52.825 "mp_policy": "active_passive" 00:18:52.825 } 00:18:52.825 } 00:18:52.825 ]' 00:18:52.825 23:03:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:52.825 23:03:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:52.825 23:03:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:53.087 23:03:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:53.087 23:03:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:53.087 23:03:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:18:53.087 23:03:31 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:18:53.087 23:03:31 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:53.087 23:03:31 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:18:53.087 23:03:31 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:53.087 23:03:31 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:53.087 23:03:32 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=fd47ff06-4993-4493-a51a-f4ef96e67627 00:18:53.087 23:03:32 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:18:53.087 23:03:32 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fd47ff06-4993-4493-a51a-f4ef96e67627 00:18:53.348 23:03:32 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:53.608 23:03:32 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=958e7d43-9f1a-48a0-ab03-e6ae2a648144 00:18:53.608 23:03:32 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 958e7d43-9f1a-48a0-ab03-e6ae2a648144 00:18:53.870 23:03:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=2f198d83-e0fa-4088-8f8c-60d554b8a77a 00:18:53.870 23:03:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2f198d83-e0fa-4088-8f8c-60d554b8a77a 00:18:53.870 23:03:32 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:18:53.870 23:03:32 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:53.870 23:03:32 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=2f198d83-e0fa-4088-8f8c-60d554b8a77a 00:18:53.870 23:03:32 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:18:53.870 23:03:32 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 2f198d83-e0fa-4088-8f8c-60d554b8a77a 00:18:53.870 23:03:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=2f198d83-e0fa-4088-8f8c-60d554b8a77a 00:18:53.870 23:03:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:53.870 23:03:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:53.870 23:03:32 
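The get_bdev_size helper traced above boils down to one jq pass over bdev_get_bdevs output and one multiplication: block_size 4096 times num_blocks 1310720 is 5 GiB, which it reports as 5120 (MiB) and which ftl/common.sh then compares against the requested 103424. A sketch of that computation with the same jq filters (the function wrapper itself is illustrative):

```bash
# Sketch of the get_bdev_size computation traced above:
# 4096 B/block * 1310720 blocks = 5 GiB -> printed as 5120 (MiB).
get_bdev_size() {
    local bdev_name=$1
    local rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    local bdev_info bs nb
    bdev_info=$("$rpc_py" bdev_get_bdevs -b "$bdev_name")
    bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096 in the dump above
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 1310720 in the dump above
    echo $((bs * nb / 1024 / 1024))               # 5120
}
```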
ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:53.870 23:03:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2f198d83-e0fa-4088-8f8c-60d554b8a77a 00:18:54.128 23:03:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:54.128 { 00:18:54.128 "name": "2f198d83-e0fa-4088-8f8c-60d554b8a77a", 00:18:54.128 "aliases": [ 00:18:54.128 "lvs/nvme0n1p0" 00:18:54.128 ], 00:18:54.128 "product_name": "Logical Volume", 00:18:54.128 "block_size": 4096, 00:18:54.128 "num_blocks": 26476544, 00:18:54.128 "uuid": "2f198d83-e0fa-4088-8f8c-60d554b8a77a", 00:18:54.128 "assigned_rate_limits": { 00:18:54.128 "rw_ios_per_sec": 0, 00:18:54.128 "rw_mbytes_per_sec": 0, 00:18:54.128 "r_mbytes_per_sec": 0, 00:18:54.128 "w_mbytes_per_sec": 0 00:18:54.128 }, 00:18:54.128 "claimed": false, 00:18:54.128 "zoned": false, 00:18:54.128 "supported_io_types": { 00:18:54.128 "read": true, 00:18:54.128 "write": true, 00:18:54.128 "unmap": true, 00:18:54.128 "flush": false, 00:18:54.128 "reset": true, 00:18:54.128 "nvme_admin": false, 00:18:54.128 "nvme_io": false, 00:18:54.128 "nvme_io_md": false, 00:18:54.128 "write_zeroes": true, 00:18:54.128 "zcopy": false, 00:18:54.128 "get_zone_info": false, 00:18:54.128 "zone_management": false, 00:18:54.128 "zone_append": false, 00:18:54.128 "compare": false, 00:18:54.128 "compare_and_write": false, 00:18:54.128 "abort": false, 00:18:54.128 "seek_hole": true, 00:18:54.128 "seek_data": true, 00:18:54.128 "copy": false, 00:18:54.128 "nvme_iov_md": false 00:18:54.128 }, 00:18:54.128 "driver_specific": { 00:18:54.128 "lvol": { 00:18:54.128 "lvol_store_uuid": "958e7d43-9f1a-48a0-ab03-e6ae2a648144", 00:18:54.128 "base_bdev": "nvme0n1", 00:18:54.128 "thin_provision": true, 00:18:54.128 "num_allocated_clusters": 0, 00:18:54.128 "snapshot": false, 00:18:54.128 "clone": false, 00:18:54.128 "esnap_clone": false 00:18:54.128 } 00:18:54.128 } 00:18:54.128 } 00:18:54.129 ]' 00:18:54.129 23:03:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:54.129 23:03:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:54.129 23:03:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:54.129 23:03:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:54.129 23:03:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:54.129 23:03:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:54.129 23:03:33 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:18:54.129 23:03:33 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:18:54.129 23:03:33 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:54.388 23:03:33 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:54.388 23:03:33 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:54.388 23:03:33 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 2f198d83-e0fa-4088-8f8c-60d554b8a77a 00:18:54.388 23:03:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=2f198d83-e0fa-4088-8f8c-60d554b8a77a 00:18:54.388 23:03:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:54.388 23:03:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:54.388 23:03:33 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1385 -- # local nb 00:18:54.388 23:03:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2f198d83-e0fa-4088-8f8c-60d554b8a77a 00:18:54.648 23:03:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:54.648 { 00:18:54.648 "name": "2f198d83-e0fa-4088-8f8c-60d554b8a77a", 00:18:54.648 "aliases": [ 00:18:54.648 "lvs/nvme0n1p0" 00:18:54.648 ], 00:18:54.648 "product_name": "Logical Volume", 00:18:54.648 "block_size": 4096, 00:18:54.648 "num_blocks": 26476544, 00:18:54.648 "uuid": "2f198d83-e0fa-4088-8f8c-60d554b8a77a", 00:18:54.649 "assigned_rate_limits": { 00:18:54.649 "rw_ios_per_sec": 0, 00:18:54.649 "rw_mbytes_per_sec": 0, 00:18:54.649 "r_mbytes_per_sec": 0, 00:18:54.649 "w_mbytes_per_sec": 0 00:18:54.649 }, 00:18:54.649 "claimed": false, 00:18:54.649 "zoned": false, 00:18:54.649 "supported_io_types": { 00:18:54.649 "read": true, 00:18:54.649 "write": true, 00:18:54.649 "unmap": true, 00:18:54.649 "flush": false, 00:18:54.649 "reset": true, 00:18:54.649 "nvme_admin": false, 00:18:54.649 "nvme_io": false, 00:18:54.649 "nvme_io_md": false, 00:18:54.649 "write_zeroes": true, 00:18:54.649 "zcopy": false, 00:18:54.649 "get_zone_info": false, 00:18:54.649 "zone_management": false, 00:18:54.649 "zone_append": false, 00:18:54.649 "compare": false, 00:18:54.649 "compare_and_write": false, 00:18:54.649 "abort": false, 00:18:54.649 "seek_hole": true, 00:18:54.649 "seek_data": true, 00:18:54.649 "copy": false, 00:18:54.649 "nvme_iov_md": false 00:18:54.649 }, 00:18:54.649 "driver_specific": { 00:18:54.649 "lvol": { 00:18:54.649 "lvol_store_uuid": "958e7d43-9f1a-48a0-ab03-e6ae2a648144", 00:18:54.649 "base_bdev": "nvme0n1", 00:18:54.649 "thin_provision": true, 00:18:54.649 "num_allocated_clusters": 0, 00:18:54.649 "snapshot": false, 00:18:54.649 "clone": false, 00:18:54.649 "esnap_clone": false 00:18:54.649 } 00:18:54.649 } 00:18:54.649 } 00:18:54.649 ]' 00:18:54.649 23:03:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:54.649 23:03:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:54.649 23:03:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:54.649 23:03:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:54.649 23:03:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:54.649 23:03:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:54.649 23:03:33 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:18:54.649 23:03:33 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:54.915 23:03:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:18:54.915 23:03:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 2f198d83-e0fa-4088-8f8c-60d554b8a77a 00:18:54.915 23:03:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=2f198d83-e0fa-4088-8f8c-60d554b8a77a 00:18:54.915 23:03:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:54.915 23:03:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:54.915 23:03:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:54.915 23:03:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2f198d83-e0fa-4088-8f8c-60d554b8a77a 
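For reference, a minimal sketch of the arithmetic the get_bdev_size helper traced above performs: it fetches the bdev descriptor over JSON-RPC, extracts block_size and num_blocks with jq, and converts the product to MiB. The rpc.py path, jq filters, and bdev UUID are the ones from this run.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bdev=2f198d83-e0fa-4088-8f8c-60d554b8a77a
  bs=$("$rpc" bdev_get_bdevs -b "$bdev" | jq '.[] .block_size')   # 4096
  nb=$("$rpc" bdev_get_bdevs -b "$bdev" | jq '.[] .num_blocks')   # 26476544
  echo $(( bs * nb / 1024 / 1024 ))                               # 103424 (MiB)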
00:18:54.915 23:03:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:54.916 { 00:18:54.916 "name": "2f198d83-e0fa-4088-8f8c-60d554b8a77a", 00:18:54.916 "aliases": [ 00:18:54.916 "lvs/nvme0n1p0" 00:18:54.916 ], 00:18:54.916 "product_name": "Logical Volume", 00:18:54.916 "block_size": 4096, 00:18:54.916 "num_blocks": 26476544, 00:18:54.916 "uuid": "2f198d83-e0fa-4088-8f8c-60d554b8a77a", 00:18:54.916 "assigned_rate_limits": { 00:18:54.916 "rw_ios_per_sec": 0, 00:18:54.916 "rw_mbytes_per_sec": 0, 00:18:54.916 "r_mbytes_per_sec": 0, 00:18:54.916 "w_mbytes_per_sec": 0 00:18:54.916 }, 00:18:54.916 "claimed": false, 00:18:54.916 "zoned": false, 00:18:54.916 "supported_io_types": { 00:18:54.916 "read": true, 00:18:54.916 "write": true, 00:18:54.916 "unmap": true, 00:18:54.916 "flush": false, 00:18:54.916 "reset": true, 00:18:54.916 "nvme_admin": false, 00:18:54.916 "nvme_io": false, 00:18:54.916 "nvme_io_md": false, 00:18:54.916 "write_zeroes": true, 00:18:54.916 "zcopy": false, 00:18:54.916 "get_zone_info": false, 00:18:54.916 "zone_management": false, 00:18:54.916 "zone_append": false, 00:18:54.916 "compare": false, 00:18:54.916 "compare_and_write": false, 00:18:54.916 "abort": false, 00:18:54.916 "seek_hole": true, 00:18:54.916 "seek_data": true, 00:18:54.916 "copy": false, 00:18:54.916 "nvme_iov_md": false 00:18:54.916 }, 00:18:54.916 "driver_specific": { 00:18:54.916 "lvol": { 00:18:54.916 "lvol_store_uuid": "958e7d43-9f1a-48a0-ab03-e6ae2a648144", 00:18:54.916 "base_bdev": "nvme0n1", 00:18:54.916 "thin_provision": true, 00:18:54.916 "num_allocated_clusters": 0, 00:18:54.916 "snapshot": false, 00:18:54.916 "clone": false, 00:18:54.916 "esnap_clone": false 00:18:54.916 } 00:18:54.916 } 00:18:54.916 } 00:18:54.916 ]' 00:18:54.916 23:03:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:55.177 23:03:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:55.177 23:03:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:55.177 23:03:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:55.177 23:03:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:55.177 23:03:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:55.177 23:03:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:18:55.177 23:03:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2f198d83-e0fa-4088-8f8c-60d554b8a77a -c nvc0n1p0 --l2p_dram_limit 20 00:18:55.177 [2024-11-26 23:03:34.250676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.177 [2024-11-26 23:03:34.250730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:55.177 [2024-11-26 23:03:34.250743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:55.177 [2024-11-26 23:03:34.250751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.177 [2024-11-26 23:03:34.250797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.177 [2024-11-26 23:03:34.250808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:55.177 [2024-11-26 23:03:34.250815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:55.177 [2024-11-26 23:03:34.250825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.178 [2024-11-26 
23:03:34.250839] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:55.178 [2024-11-26 23:03:34.251062] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:55.178 [2024-11-26 23:03:34.251074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.178 [2024-11-26 23:03:34.251086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:55.178 [2024-11-26 23:03:34.251093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:18:55.178 [2024-11-26 23:03:34.251104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.178 [2024-11-26 23:03:34.251126] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2aa1a28d-2a40-46fb-87dc-4b50436c37ff 00:18:55.178 [2024-11-26 23:03:34.252408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.178 [2024-11-26 23:03:34.252426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:55.178 [2024-11-26 23:03:34.252436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:55.178 [2024-11-26 23:03:34.252444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.178 [2024-11-26 23:03:34.259275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.178 [2024-11-26 23:03:34.259310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:55.178 [2024-11-26 23:03:34.259322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.768 ms 00:18:55.178 [2024-11-26 23:03:34.259329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.178 [2024-11-26 23:03:34.259402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.178 [2024-11-26 23:03:34.259412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:55.178 [2024-11-26 23:03:34.259424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:18:55.178 [2024-11-26 23:03:34.259430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.178 [2024-11-26 23:03:34.259464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.178 [2024-11-26 23:03:34.259472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:55.178 [2024-11-26 23:03:34.259482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:55.178 [2024-11-26 23:03:34.259488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.178 [2024-11-26 23:03:34.259506] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:55.178 [2024-11-26 23:03:34.261134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.178 [2024-11-26 23:03:34.261164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:55.178 [2024-11-26 23:03:34.261173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.636 ms 00:18:55.178 [2024-11-26 23:03:34.261181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.178 [2024-11-26 23:03:34.261213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.178 [2024-11-26 23:03:34.261224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:55.178 [2024-11-26 
23:03:34.261231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:55.178 [2024-11-26 23:03:34.261240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.178 [2024-11-26 23:03:34.261253] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:55.178 [2024-11-26 23:03:34.261379] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:55.178 [2024-11-26 23:03:34.261388] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:55.178 [2024-11-26 23:03:34.261402] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:55.178 [2024-11-26 23:03:34.261410] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:55.178 [2024-11-26 23:03:34.261418] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:55.178 [2024-11-26 23:03:34.261427] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:55.178 [2024-11-26 23:03:34.261434] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:55.178 [2024-11-26 23:03:34.261442] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:55.178 [2024-11-26 23:03:34.261450] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:55.178 [2024-11-26 23:03:34.261456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.178 [2024-11-26 23:03:34.261463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:55.178 [2024-11-26 23:03:34.261469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:18:55.178 [2024-11-26 23:03:34.261480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.178 [2024-11-26 23:03:34.261541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.178 [2024-11-26 23:03:34.261549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:55.178 [2024-11-26 23:03:34.261554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:55.178 [2024-11-26 23:03:34.261561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.178 [2024-11-26 23:03:34.261632] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:55.178 [2024-11-26 23:03:34.261641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:55.178 [2024-11-26 23:03:34.261647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:55.178 [2024-11-26 23:03:34.261658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:55.178 [2024-11-26 23:03:34.261671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:55.178 [2024-11-26 23:03:34.261678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:55.178 [2024-11-26 23:03:34.261683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:55.178 [2024-11-26 23:03:34.261690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:55.178 [2024-11-26 23:03:34.261695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:55.178 [2024-11-26 23:03:34.261701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.50 MiB 00:18:55.178 [2024-11-26 23:03:34.261706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:55.178 [2024-11-26 23:03:34.261715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:55.178 [2024-11-26 23:03:34.261719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:55.178 [2024-11-26 23:03:34.261726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:55.178 [2024-11-26 23:03:34.261731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:55.178 [2024-11-26 23:03:34.261737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:55.178 [2024-11-26 23:03:34.261741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:55.178 [2024-11-26 23:03:34.261748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:55.178 [2024-11-26 23:03:34.261753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:55.178 [2024-11-26 23:03:34.261759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:55.178 [2024-11-26 23:03:34.261764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:55.178 [2024-11-26 23:03:34.261770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:55.178 [2024-11-26 23:03:34.261775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:55.178 [2024-11-26 23:03:34.261781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:55.178 [2024-11-26 23:03:34.261786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:55.178 [2024-11-26 23:03:34.261792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:55.178 [2024-11-26 23:03:34.261797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:55.178 [2024-11-26 23:03:34.261805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:55.178 [2024-11-26 23:03:34.261810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:55.178 [2024-11-26 23:03:34.261817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:55.178 [2024-11-26 23:03:34.261822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:55.178 [2024-11-26 23:03:34.261829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:55.178 [2024-11-26 23:03:34.261833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:55.178 [2024-11-26 23:03:34.261840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:55.178 [2024-11-26 23:03:34.261845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:55.178 [2024-11-26 23:03:34.261851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:55.178 [2024-11-26 23:03:34.261856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:55.178 [2024-11-26 23:03:34.261863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:55.178 [2024-11-26 23:03:34.261870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:55.178 [2024-11-26 23:03:34.261876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:55.178 [2024-11-26 23:03:34.261881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:55.178 [2024-11-26 23:03:34.261888] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:55.178 [2024-11-26 23:03:34.261893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:55.178 [2024-11-26 23:03:34.261901] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:55.178 [2024-11-26 23:03:34.261909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:55.178 [2024-11-26 23:03:34.261918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:55.178 [2024-11-26 23:03:34.261924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:55.178 [2024-11-26 23:03:34.261932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:55.178 [2024-11-26 23:03:34.261937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:55.178 [2024-11-26 23:03:34.261943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:55.178 [2024-11-26 23:03:34.261948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:55.178 [2024-11-26 23:03:34.261956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:55.178 [2024-11-26 23:03:34.261961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:55.178 [2024-11-26 23:03:34.261971] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:55.179 [2024-11-26 23:03:34.261978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:55.179 [2024-11-26 23:03:34.261988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:55.179 [2024-11-26 23:03:34.261993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:55.179 [2024-11-26 23:03:34.262000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:55.179 [2024-11-26 23:03:34.262005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:55.179 [2024-11-26 23:03:34.262013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:55.179 [2024-11-26 23:03:34.262018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:55.179 [2024-11-26 23:03:34.262025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:55.179 [2024-11-26 23:03:34.262030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:55.179 [2024-11-26 23:03:34.262037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:55.179 [2024-11-26 23:03:34.262042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:55.179 [2024-11-26 23:03:34.262049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:55.179 [2024-11-26 
23:03:34.262054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:55.179 [2024-11-26 23:03:34.262060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:55.179 [2024-11-26 23:03:34.262066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:55.179 [2024-11-26 23:03:34.262072] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:55.179 [2024-11-26 23:03:34.262080] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:55.179 [2024-11-26 23:03:34.262087] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:55.179 [2024-11-26 23:03:34.262093] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:55.179 [2024-11-26 23:03:34.262100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:55.179 [2024-11-26 23:03:34.262106] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:55.179 [2024-11-26 23:03:34.262116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.179 [2024-11-26 23:03:34.262122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:55.179 [2024-11-26 23:03:34.262131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:18:55.179 [2024-11-26 23:03:34.262137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.179 [2024-11-26 23:03:34.262162] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
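Condensed from the RPC trace above, the device stack under test is assembled in five calls. The UUID arguments are placeholders for the values printed earlier in this log, and the 20 passed to --l2p_dram_limit is the 20 MiB L2P cap echoed below as "l2p maximum resident size is: 19 (of 20) MiB".

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" bdev_lvol_create_lvstore nvme0n1 lvs                         # -> lvstore UUID
  "$rpc" bdev_lvol_create nvme0n1p0 103424 -t -u <lvs_uuid>           # thin-provisioned base lvol
  "$rpc" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0  # cache controller -> nvc0n1
  "$rpc" bdev_split_create nvc0n1 -s 5171 1                           # 5171 MiB NV cache slice
  "$rpc" -t 240 bdev_ftl_create -b ftl0 -d <lvol_uuid> -c nvc0n1p0 --l2p_dram_limit 20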
00:18:55.179 [2024-11-26 23:03:34.262168] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:59.380 [2024-11-26 23:03:38.154502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.380 [2024-11-26 23:03:38.154581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:59.380 [2024-11-26 23:03:38.154600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3892.321 ms 00:18:59.380 [2024-11-26 23:03:38.154610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.380 [2024-11-26 23:03:38.168458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.380 [2024-11-26 23:03:38.168505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:59.380 [2024-11-26 23:03:38.168530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.712 ms 00:18:59.380 [2024-11-26 23:03:38.168541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.380 [2024-11-26 23:03:38.168632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.380 [2024-11-26 23:03:38.168644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:59.380 [2024-11-26 23:03:38.168656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:18:59.380 [2024-11-26 23:03:38.168664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.380 [2024-11-26 23:03:38.196148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.380 [2024-11-26 23:03:38.196243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:59.380 [2024-11-26 23:03:38.196284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.420 ms 00:18:59.380 [2024-11-26 23:03:38.196340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.380 [2024-11-26 23:03:38.196433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.380 [2024-11-26 23:03:38.196463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:59.380 [2024-11-26 23:03:38.196495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:59.380 [2024-11-26 23:03:38.196520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.380 [2024-11-26 23:03:38.197256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.380 [2024-11-26 23:03:38.197351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:59.380 [2024-11-26 23:03:38.197387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:18:59.380 [2024-11-26 23:03:38.197409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.380 [2024-11-26 23:03:38.197698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.380 [2024-11-26 23:03:38.197743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:59.380 [2024-11-26 23:03:38.197772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:18:59.380 [2024-11-26 23:03:38.197793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.380 [2024-11-26 23:03:38.206011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.380 [2024-11-26 23:03:38.206053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:59.380 [2024-11-26 
23:03:38.206067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.173 ms 00:18:59.380 [2024-11-26 23:03:38.206075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.380 [2024-11-26 23:03:38.216336] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:18:59.380 [2024-11-26 23:03:38.224136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.380 [2024-11-26 23:03:38.224178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:59.380 [2024-11-26 23:03:38.224190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.996 ms 00:18:59.380 [2024-11-26 23:03:38.224205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.380 [2024-11-26 23:03:38.313372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.380 [2024-11-26 23:03:38.313425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:59.380 [2024-11-26 23:03:38.313438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.140 ms 00:18:59.380 [2024-11-26 23:03:38.313453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.380 [2024-11-26 23:03:38.313649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.380 [2024-11-26 23:03:38.313664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:59.380 [2024-11-26 23:03:38.313675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:18:59.380 [2024-11-26 23:03:38.313686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.380 [2024-11-26 23:03:38.319228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.380 [2024-11-26 23:03:38.319279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:59.380 [2024-11-26 23:03:38.319307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.507 ms 00:18:59.380 [2024-11-26 23:03:38.319321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.380 [2024-11-26 23:03:38.324325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.380 [2024-11-26 23:03:38.324373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:59.380 [2024-11-26 23:03:38.324384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.957 ms 00:18:59.380 [2024-11-26 23:03:38.324395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.380 [2024-11-26 23:03:38.324732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.380 [2024-11-26 23:03:38.324750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:59.380 [2024-11-26 23:03:38.324764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:18:59.380 [2024-11-26 23:03:38.324774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.380 [2024-11-26 23:03:38.375927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.380 [2024-11-26 23:03:38.375977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:59.380 [2024-11-26 23:03:38.375990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.134 ms 00:18:59.380 [2024-11-26 23:03:38.376004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.380 [2024-11-26 23:03:38.383358] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.380 [2024-11-26 23:03:38.383405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:59.380 [2024-11-26 23:03:38.383416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.282 ms 00:18:59.380 [2024-11-26 23:03:38.383428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.380 [2024-11-26 23:03:38.389237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.380 [2024-11-26 23:03:38.389287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:59.380 [2024-11-26 23:03:38.389315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.765 ms 00:18:59.380 [2024-11-26 23:03:38.389326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.380 [2024-11-26 23:03:38.398625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.380 [2024-11-26 23:03:38.398739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:59.380 [2024-11-26 23:03:38.398762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.250 ms 00:18:59.380 [2024-11-26 23:03:38.398778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.380 [2024-11-26 23:03:38.398859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.380 [2024-11-26 23:03:38.398878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:59.380 [2024-11-26 23:03:38.398893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:18:59.380 [2024-11-26 23:03:38.398909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.380 [2024-11-26 23:03:38.399022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.380 [2024-11-26 23:03:38.399048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:59.380 [2024-11-26 23:03:38.399062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:18:59.380 [2024-11-26 23:03:38.399082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.380 [2024-11-26 23:03:38.400896] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4149.472 ms, result 0 00:18:59.380 { 00:18:59.380 "name": "ftl0", 00:18:59.380 "uuid": "2aa1a28d-2a40-46fb-87dc-4b50436c37ff" 00:18:59.380 } 00:18:59.380 23:03:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:18:59.380 23:03:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:18:59.380 23:03:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:18:59.641 23:03:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:18:59.641 [2024-11-26 23:03:38.722074] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:59.641 I/O size of 69632 is greater than zero copy threshold (65536). 00:18:59.641 Zero copy mechanism will not be used. 00:18:59.641 Running I/O for 4 seconds... 
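A quick way to cross-check the summary table that follows: the MiB/s column is consistent with IOPS × io_size / 2^20. For this first pass (queue depth 1, 69632-byte random writes), using the figures bdevperf reports:

  awk 'BEGIN { printf "%.2f MiB/s\n", 694.76 * 69632 / 2^20 }'   # ≈ 46.14, matching the ftl0 row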
00:19:01.602 656.00 IOPS, 43.56 MiB/s [2024-11-26T23:03:42.115Z] 667.00 IOPS, 44.29 MiB/s [2024-11-26T23:03:43.052Z] 690.00 IOPS, 45.82 MiB/s [2024-11-26T23:03:43.052Z] 694.75 IOPS, 46.14 MiB/s 00:19:03.925 Latency(us) 00:19:03.925 [2024-11-26T23:03:43.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.925 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:19:03.925 ftl0 : 4.00 694.76 46.14 0.00 0.00 1533.73 222.13 3276.80 00:19:03.925 [2024-11-26T23:03:43.052Z] =================================================================================================================== 00:19:03.925 [2024-11-26T23:03:43.052Z] Total : 694.76 46.14 0.00 0.00 1533.73 222.13 3276.80 00:19:03.925 [2024-11-26 23:03:42.729718] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:03.925 { 00:19:03.925 "results": [ 00:19:03.925 { 00:19:03.925 "job": "ftl0", 00:19:03.925 "core_mask": "0x1", 00:19:03.925 "workload": "randwrite", 00:19:03.925 "status": "finished", 00:19:03.925 "queue_depth": 1, 00:19:03.925 "io_size": 69632, 00:19:03.925 "runtime": 4.001357, 00:19:03.925 "iops": 694.7643012108142, 00:19:03.925 "mibps": 46.13669187728063, 00:19:03.925 "io_failed": 0, 00:19:03.925 "io_timeout": 0, 00:19:03.925 "avg_latency_us": 1533.7346762589927, 00:19:03.925 "min_latency_us": 222.12923076923076, 00:19:03.925 "max_latency_us": 3276.8 00:19:03.925 } 00:19:03.925 ], 00:19:03.925 "core_count": 1 00:19:03.925 } 00:19:03.925 23:03:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:19:03.925 [2024-11-26 23:03:42.834750] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:03.925 Running I/O for 4 seconds... 
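The second pass, launched above, switches to 128-deep 4 KiB random writes; the same identity reproduces its reported throughput:

  awk 'BEGIN { printf "%.2f MiB/s\n", 5925.87 * 4096 / 2^20 }'   # ≈ 23.15 MiB/s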
00:19:05.806 8259.00 IOPS, 32.26 MiB/s [2024-11-26T23:03:45.875Z] 6802.50 IOPS, 26.57 MiB/s [2024-11-26T23:03:47.313Z] 6216.67 IOPS, 24.28 MiB/s [2024-11-26T23:03:47.313Z] 5941.75 IOPS, 23.21 MiB/s 00:19:08.186 Latency(us) 00:19:08.186 [2024-11-26T23:03:47.313Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.186 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:08.186 ftl0 : 4.03 5925.87 23.15 0.00 0.00 21516.56 285.14 44362.83 00:19:08.186 [2024-11-26T23:03:47.313Z] =================================================================================================================== 00:19:08.186 [2024-11-26T23:03:47.313Z] Total : 5925.87 23.15 0.00 0.00 21516.56 0.00 44362.83 00:19:08.186 { 00:19:08.186 "results": [ 00:19:08.186 { 00:19:08.186 "job": "ftl0", 00:19:08.186 "core_mask": "0x1", 00:19:08.186 "workload": "randwrite", 00:19:08.186 "status": "finished", 00:19:08.186 "queue_depth": 128, 00:19:08.186 "io_size": 4096, 00:19:08.186 "runtime": 4.031138, 00:19:08.186 "iops": 5925.8700644830315, 00:19:08.186 "mibps": 23.14792993938684, 00:19:08.186 "io_failed": 0, 00:19:08.186 "io_timeout": 0, 00:19:08.186 "avg_latency_us": 21516.562980060797, 00:19:08.186 "min_latency_us": 285.1446153846154, 00:19:08.186 "max_latency_us": 44362.83076923077 00:19:08.186 } 00:19:08.186 ], 00:19:08.186 "core_count": 1 00:19:08.186 } 00:19:08.186 [2024-11-26 23:03:46.872992] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:08.186 23:03:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:19:08.186 [2024-11-26 23:03:46.989930] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:08.186 Running I/O for 4 seconds... 
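The final pass is a verify workload. The verify_range printed below (start 0x0, length 0x1400000 = 20971520 blocks, × 4096 B = 80 GiB) matches the "L2P entries: 20971520" figure from FTL startup, so the run appears to exercise the full logical space. The throughput check once more:

  awk 'BEGIN { printf "%.2f MiB/s\n", 4670.32 * 4096 / 2^20 }'   # ≈ 18.24 MiB/s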
00:19:10.076 4623.00 IOPS, 18.06 MiB/s [2024-11-26T23:03:50.144Z] 4558.00 IOPS, 17.80 MiB/s [2024-11-26T23:03:51.085Z] 4606.67 IOPS, 17.99 MiB/s [2024-11-26T23:03:51.085Z] 4657.00 IOPS, 18.19 MiB/s 00:19:11.958 Latency(us) 00:19:11.958 [2024-11-26T23:03:51.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.958 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:11.958 Verification LBA range: start 0x0 length 0x1400000 00:19:11.958 ftl0 : 4.02 4670.32 18.24 0.00 0.00 27324.77 300.90 44362.83 00:19:11.958 [2024-11-26T23:03:51.085Z] =================================================================================================================== 00:19:11.958 [2024-11-26T23:03:51.085Z] Total : 4670.32 18.24 0.00 0.00 27324.77 0.00 44362.83 00:19:11.958 { 00:19:11.958 "results": [ 00:19:11.958 { 00:19:11.958 "job": "ftl0", 00:19:11.958 "core_mask": "0x1", 00:19:11.958 "workload": "verify", 00:19:11.958 "status": "finished", 00:19:11.958 "verify_range": { 00:19:11.958 "start": 0, 00:19:11.958 "length": 20971520 00:19:11.958 }, 00:19:11.958 "queue_depth": 128, 00:19:11.958 "io_size": 4096, 00:19:11.958 "runtime": 4.015786, 00:19:11.958 "iops": 4670.318587693667, 00:19:11.958 "mibps": 18.243431983178386, 00:19:11.958 "io_failed": 0, 00:19:11.958 "io_timeout": 0, 00:19:11.958 "avg_latency_us": 27324.774167955216, 00:19:11.958 "min_latency_us": 300.89846153846156, 00:19:11.958 "max_latency_us": 44362.83076923077 00:19:11.958 } 00:19:11.958 ], 00:19:11.958 "core_count": 1 00:19:11.958 } 00:19:11.958 [2024-11-26 23:03:51.018368] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:11.958 23:03:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:19:12.217 [2024-11-26 23:03:51.189579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.217 [2024-11-26 23:03:51.189630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:12.217 [2024-11-26 23:03:51.189644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:12.217 [2024-11-26 23:03:51.189657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.217 [2024-11-26 23:03:51.189679] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:12.217 [2024-11-26 23:03:51.190222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.217 [2024-11-26 23:03:51.190245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:12.217 [2024-11-26 23:03:51.190256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:19:12.217 [2024-11-26 23:03:51.190264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.217 [2024-11-26 23:03:51.192837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.217 [2024-11-26 23:03:51.192869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:12.217 [2024-11-26 23:03:51.192886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.550 ms 00:19:12.217 [2024-11-26 23:03:51.192894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.478 [2024-11-26 23:03:51.386857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.479 [2024-11-26 23:03:51.386903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:19:12.479 [2024-11-26 23:03:51.386919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 193.943 ms 00:19:12.479 [2024-11-26 23:03:51.386933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.479 [2024-11-26 23:03:51.393228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.479 [2024-11-26 23:03:51.393256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:12.479 [2024-11-26 23:03:51.393268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.262 ms 00:19:12.479 [2024-11-26 23:03:51.393277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.479 [2024-11-26 23:03:51.395504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.479 [2024-11-26 23:03:51.395534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:12.479 [2024-11-26 23:03:51.395546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.154 ms 00:19:12.479 [2024-11-26 23:03:51.395553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.479 [2024-11-26 23:03:51.400464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.479 [2024-11-26 23:03:51.400496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:12.479 [2024-11-26 23:03:51.400514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.878 ms 00:19:12.479 [2024-11-26 23:03:51.400527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.479 [2024-11-26 23:03:51.400636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.479 [2024-11-26 23:03:51.400646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:12.479 [2024-11-26 23:03:51.400656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:19:12.479 [2024-11-26 23:03:51.400664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.479 [2024-11-26 23:03:51.403176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.479 [2024-11-26 23:03:51.403207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:12.479 [2024-11-26 23:03:51.403218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.491 ms 00:19:12.479 [2024-11-26 23:03:51.403225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.479 [2024-11-26 23:03:51.405445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.479 [2024-11-26 23:03:51.405473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:12.479 [2024-11-26 23:03:51.405484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.187 ms 00:19:12.479 [2024-11-26 23:03:51.405491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.479 [2024-11-26 23:03:51.407428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.479 [2024-11-26 23:03:51.407457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:12.479 [2024-11-26 23:03:51.407471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.905 ms 00:19:12.479 [2024-11-26 23:03:51.407477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.479 [2024-11-26 23:03:51.409212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.479 [2024-11-26 23:03:51.409242] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:12.479 [2024-11-26 23:03:51.409253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.681 ms 00:19:12.479 [2024-11-26 23:03:51.409260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.479 [2024-11-26 23:03:51.409291] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:12.479 [2024-11-26 23:03:51.409318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:19:12.479 [2024-11-26 23:03:51.409503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:12.479 [2024-11-26 23:03:51.409711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free
00:19:12.479 [2024-11-26 23:03:51.409720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:19:12.479 [2024-11-26 23:03:51.409727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:19:12.479 [2024-11-26 23:03:51.409736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:19:12.479 [2024-11-26 23:03:51.409743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:19:12.479 [2024-11-26 23:03:51.409754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:19:12.479 [2024-11-26 23:03:51.409761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:19:12.479 [2024-11-26 23:03:51.409770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:19:12.479 [2024-11-26 23:03:51.409778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:19:12.479 [2024-11-26 23:03:51.409787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:19:12.479 [2024-11-26 23:03:51.409794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:19:12.479 [2024-11-26 23:03:51.409805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:19:12.479 [2024-11-26 23:03:51.409812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.409821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.409828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.409857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.409868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.409877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.409884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.409894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.409902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.409910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.409918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.409927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.409934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.409943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.409951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.409962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.409970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.409979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.409987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.409997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.410004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.410013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.410020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.410028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.410035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.410044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.410051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.410060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.410067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.410075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.410083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.410093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.410101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.410110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.410117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.410126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.410139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.410148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.410156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.410164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.410172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.410181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:19:12.480 [2024-11-26 23:03:51.410197] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:19:12.480 [2024-11-26 23:03:51.410210] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2aa1a28d-2a40-46fb-87dc-4b50436c37ff
00:19:12.480 [2024-11-26 23:03:51.410218] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:19:12.480 [2024-11-26 23:03:51.410229] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:19:12.480 [2024-11-26 23:03:51.410239] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:19:12.480 [2024-11-26 23:03:51.410250] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:19:12.480 [2024-11-26 23:03:51.410257] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:19:12.480 [2024-11-26 23:03:51.410266] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:19:12.480 [2024-11-26 23:03:51.410276] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:19:12.480 [2024-11-26 23:03:51.410284] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:19:12.480 [2024-11-26 23:03:51.410291] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:19:12.480 [2024-11-26 23:03:51.410311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.480 [2024-11-26 23:03:51.410318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:19:12.480 [2024-11-26 23:03:51.410330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.023 ms
00:19:12.480 [2024-11-26 23:03:51.410337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.480 [2024-11-26 23:03:51.412182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.480 [2024-11-26 23:03:51.412206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:19:12.480 [2024-11-26 23:03:51.412219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.826 ms
00:19:12.480 [2024-11-26 23:03:51.412227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.480 [2024-11-26 23:03:51.412351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.480 [2024-11-26 23:03:51.412362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:19:12.480 [2024-11-26 23:03:51.412376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms
00:19:12.480 [2024-11-26 23:03:51.412384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.480 [2024-11-26 23:03:51.418765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:12.480 [2024-11-26 23:03:51.418798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:19:12.480 [2024-11-26 23:03:51.418813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:12.480 [2024-11-26 23:03:51.418822] mngt/ftl_mngt.c: 431:trace_step:
*NOTICE*: [FTL][ftl0] status: 0 00:19:12.480 [2024-11-26 23:03:51.418883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.480 [2024-11-26 23:03:51.418892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:12.480 [2024-11-26 23:03:51.418902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.480 [2024-11-26 23:03:51.418909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.480 [2024-11-26 23:03:51.418970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.480 [2024-11-26 23:03:51.418980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:12.480 [2024-11-26 23:03:51.418990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.480 [2024-11-26 23:03:51.418997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.480 [2024-11-26 23:03:51.419014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.480 [2024-11-26 23:03:51.419023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:12.480 [2024-11-26 23:03:51.419035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.480 [2024-11-26 23:03:51.419042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.480 [2024-11-26 23:03:51.430534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.480 [2024-11-26 23:03:51.430577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:12.480 [2024-11-26 23:03:51.430590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.480 [2024-11-26 23:03:51.430598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.480 [2024-11-26 23:03:51.440405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.480 [2024-11-26 23:03:51.440450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:12.480 [2024-11-26 23:03:51.440462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.480 [2024-11-26 23:03:51.440470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.480 [2024-11-26 23:03:51.440537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.480 [2024-11-26 23:03:51.440547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:12.480 [2024-11-26 23:03:51.440558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.480 [2024-11-26 23:03:51.440566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.480 [2024-11-26 23:03:51.440599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.480 [2024-11-26 23:03:51.440608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:12.480 [2024-11-26 23:03:51.440621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.480 [2024-11-26 23:03:51.440630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.480 [2024-11-26 23:03:51.440703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.480 [2024-11-26 23:03:51.440716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:12.480 [2024-11-26 23:03:51.440726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:19:12.480 [2024-11-26 23:03:51.440733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.480 [2024-11-26 23:03:51.440772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.480 [2024-11-26 23:03:51.440781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:12.480 [2024-11-26 23:03:51.440791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.481 [2024-11-26 23:03:51.440801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.481 [2024-11-26 23:03:51.440839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.481 [2024-11-26 23:03:51.440849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:12.481 [2024-11-26 23:03:51.440858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.481 [2024-11-26 23:03:51.440866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.481 [2024-11-26 23:03:51.440916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.481 [2024-11-26 23:03:51.440925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:12.481 [2024-11-26 23:03:51.440937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.481 [2024-11-26 23:03:51.440950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.481 [2024-11-26 23:03:51.441085] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 251.457 ms, result 0 00:19:12.481 true 00:19:12.481 23:03:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 88852 00:19:12.481 23:03:51 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 88852 ']' 00:19:12.481 23:03:51 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 88852 00:19:12.481 23:03:51 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:19:12.481 23:03:51 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:12.481 23:03:51 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88852 00:19:12.481 killing process with pid 88852 00:19:12.481 Received shutdown signal, test time was about 4.000000 seconds 00:19:12.481 00:19:12.481 Latency(us) 00:19:12.481 [2024-11-26T23:03:51.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.481 [2024-11-26T23:03:51.608Z] =================================================================================================================== 00:19:12.481 [2024-11-26T23:03:51.608Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:12.481 23:03:51 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:12.481 23:03:51 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:12.481 23:03:51 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88852' 00:19:12.481 23:03:51 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 88852 00:19:12.481 23:03:51 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 88852 00:19:17.764 Remove shared memory files 00:19:17.764 23:03:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:17.764 23:03:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:17.764 23:03:56 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:17.764 23:03:56 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:17.764 23:03:56 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:17.764 23:03:56 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:17.765 23:03:56 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:17.765 23:03:56 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:17.765 00:19:17.765 real 0m25.878s 00:19:17.765 user 0m28.181s 00:19:17.765 sys 0m1.076s 00:19:17.765 23:03:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:17.765 ************************************ 00:19:17.765 END TEST ftl_bdevperf 00:19:17.765 ************************************ 00:19:17.765 23:03:56 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:17.765 23:03:56 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:17.765 23:03:56 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:17.765 23:03:56 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:17.765 23:03:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:17.765 ************************************ 00:19:17.765 START TEST ftl_trim 00:19:17.765 ************************************ 00:19:17.765 23:03:56 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:17.765 * Looking for test storage... 00:19:17.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:17.765 23:03:56 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:17.765 23:03:56 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:19:17.765 23:03:56 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:17.765 23:03:56 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:17.765 23:03:56 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:17.765 23:03:56 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:17.765 23:03:56 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:17.765 23:03:56 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:17.765 23:03:56 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:17.765 23:03:56 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:17.765 23:03:56 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:17.765 23:03:56 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:17.765 23:03:56 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:17.765 23:03:56 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:17.765 23:03:56 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:17.765 23:03:56 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:17.765 23:03:56 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:17.765 23:03:56 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:17.765 23:03:56 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:17.765 23:03:56 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:17.765 23:03:56 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:17.765 23:03:56 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:17.765 23:03:56 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:17.765 23:03:56 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:17.765 23:03:56 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:17.765 23:03:56 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:17.765 23:03:56 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:17.765 23:03:56 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:17.765 23:03:56 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:17.765 23:03:56 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:17.765 23:03:56 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:17.765 23:03:56 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:17.765 23:03:56 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:17.765 23:03:56 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:17.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.765 --rc genhtml_branch_coverage=1 00:19:17.765 --rc genhtml_function_coverage=1 00:19:17.765 --rc genhtml_legend=1 00:19:17.765 --rc geninfo_all_blocks=1 00:19:17.765 --rc geninfo_unexecuted_blocks=1 00:19:17.765 00:19:17.765 ' 00:19:17.765 23:03:56 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:17.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.765 --rc genhtml_branch_coverage=1 00:19:17.765 --rc genhtml_function_coverage=1 00:19:17.765 --rc genhtml_legend=1 00:19:17.765 --rc geninfo_all_blocks=1 00:19:17.765 --rc geninfo_unexecuted_blocks=1 00:19:17.765 00:19:17.765 ' 00:19:17.765 23:03:56 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:17.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.765 --rc genhtml_branch_coverage=1 00:19:17.765 --rc genhtml_function_coverage=1 00:19:17.765 --rc genhtml_legend=1 00:19:17.765 --rc geninfo_all_blocks=1 00:19:17.765 --rc geninfo_unexecuted_blocks=1 00:19:17.765 00:19:17.765 ' 00:19:17.765 23:03:56 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:17.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.765 --rc genhtml_branch_coverage=1 00:19:17.765 --rc genhtml_function_coverage=1 00:19:17.765 --rc genhtml_legend=1 00:19:17.765 --rc geninfo_all_blocks=1 00:19:17.765 --rc geninfo_unexecuted_blocks=1 00:19:17.765 00:19:17.765 ' 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
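The xtrace above is scripts/common.sh gating the coverage options on the installed lcov: 'lt 1.15 2' expands to 'cmp_versions 1.15 "<" 2', which splits both version strings on '.', '-' and ':' and compares them field by field, numerically. A condensed bash sketch of that comparison follows (an editorial illustration, not the full scripts/common.sh helper; the name cmp_versions_sketch is hypothetical):

cmp_versions_sketch() {   # usage mirrors the trace: cmp_versions_sketch 1.15 '<' 2
    local IFS=.-: v
    local -a ver1 ver2
    read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
    read -ra ver2 <<< "$3"   # "2"    -> (2)
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing fields compare as 0
        ((d1 > d2)) && { [[ $2 == '>' || $2 == '>=' ]]; return; }
        ((d1 < d2)) && { [[ $2 == '<' || $2 == '<=' ]]; return; }
    done
    [[ $2 == *'=' ]]   # all fields equal: only >= and <= hold
}
cmp_versions_sketch 1.15 '<' 2 && echo 'lcov 1.15 predates 2'

Here 1.15 sorts before 2 on the first field (1 < 2), which is why the run above falls back to the --rc lcov_branch_coverage/lcov_function_coverage spellings in LCOV_OPTS.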
00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:17.765 23:03:56 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=89199 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 89199 00:19:17.765 23:03:56 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 89199 ']' 00:19:17.765 23:03:56 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.765 23:03:56 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:17.765 23:03:56 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.765 23:03:56 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:17.765 23:03:56 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:17.765 23:03:56 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:17.765 [2024-11-26 23:03:56.566942] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:19:17.765 [2024-11-26 23:03:56.567109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89199 ] 00:19:17.765 [2024-11-26 23:03:56.709631] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:17.765 [2024-11-26 23:03:56.737619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:17.765 [2024-11-26 23:03:56.769858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.766 [2024-11-26 23:03:56.770183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.766 [2024-11-26 23:03:56.770231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.338 23:03:57 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.338 23:03:57 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:18.338 23:03:57 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:18.338 23:03:57 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:18.338 23:03:57 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:18.338 23:03:57 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:18.338 23:03:57 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:18.338 23:03:57 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:18.910 23:03:57 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:18.910 23:03:57 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:18.910 23:03:57 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:18.910 23:03:57 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:18.910 23:03:57 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:18.910 23:03:57 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:18.910 23:03:57 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:18.910 23:03:57 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:18.910 23:03:57 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:18.910 { 00:19:18.910 "name": "nvme0n1", 00:19:18.910 "aliases": [ 00:19:18.910 "c1393cc0-a09d-45bd-90e5-47ca69be8a8a" 00:19:18.910 ], 00:19:18.910 "product_name": "NVMe disk", 00:19:18.910 "block_size": 4096, 00:19:18.910 "num_blocks": 1310720, 00:19:18.910 "uuid": "c1393cc0-a09d-45bd-90e5-47ca69be8a8a", 00:19:18.910 "numa_id": -1, 00:19:18.910 "assigned_rate_limits": { 00:19:18.910 "rw_ios_per_sec": 0, 00:19:18.910 "rw_mbytes_per_sec": 0, 00:19:18.910 "r_mbytes_per_sec": 0, 00:19:18.910 "w_mbytes_per_sec": 0 00:19:18.910 }, 00:19:18.910 "claimed": true, 00:19:18.910 "claim_type": "read_many_write_one", 00:19:18.910 "zoned": false, 00:19:18.910 "supported_io_types": { 00:19:18.910 "read": true, 00:19:18.910 "write": true, 00:19:18.910 "unmap": true, 00:19:18.910 "flush": true, 00:19:18.910 "reset": true, 00:19:18.910 "nvme_admin": true, 00:19:18.910 "nvme_io": true, 00:19:18.910 "nvme_io_md": false, 00:19:18.910 "write_zeroes": true, 00:19:18.910 "zcopy": false, 00:19:18.910 "get_zone_info": false, 00:19:18.910 "zone_management": false, 00:19:18.910 "zone_append": false, 00:19:18.910 "compare": true, 00:19:18.910 "compare_and_write": false, 00:19:18.910 "abort": true, 00:19:18.910 "seek_hole": false, 00:19:18.910 "seek_data": false, 00:19:18.910 "copy": true, 00:19:18.910 "nvme_iov_md": false 00:19:18.910 }, 00:19:18.910 "driver_specific": { 00:19:18.910 "nvme": [ 00:19:18.910 { 00:19:18.910 "pci_address": "0000:00:11.0", 00:19:18.910 "trid": { 00:19:18.910 "trtype": "PCIe", 00:19:18.910 "traddr": "0000:00:11.0" 00:19:18.910 }, 00:19:18.910 "ctrlr_data": { 00:19:18.910 "cntlid": 0, 00:19:18.910 "vendor_id": "0x1b36", 00:19:18.910 "model_number": "QEMU NVMe Ctrl", 00:19:18.910 "serial_number": "12341", 00:19:18.910 "firmware_revision": "8.0.0", 00:19:18.910 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:18.910 "oacs": { 00:19:18.910 "security": 0, 00:19:18.910 "format": 1, 00:19:18.910 "firmware": 0, 00:19:18.910 "ns_manage": 1 00:19:18.910 }, 00:19:18.910 "multi_ctrlr": false, 00:19:18.910 "ana_reporting": false 00:19:18.910 }, 00:19:18.910 "vs": { 00:19:18.910 "nvme_version": "1.4" 00:19:18.910 }, 00:19:18.910 "ns_data": { 00:19:18.910 "id": 1, 00:19:18.910 "can_share": false 00:19:18.910 } 00:19:18.910 } 00:19:18.910 ], 00:19:18.910 "mp_policy": "active_passive" 00:19:18.910 } 00:19:18.910 } 00:19:18.910 ]' 00:19:18.910 23:03:57 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:18.910 23:03:57 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:18.910 23:03:57 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:18.910 23:03:58 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:18.910 23:03:58 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:18.910 23:03:58 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:19:18.910 23:03:58 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:18.910 23:03:58 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:18.910 23:03:58 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:18.910 23:03:58 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:18.910 23:03:58 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:19.172 23:03:58 ftl.ftl_trim -- ftl/common.sh@28 -- # 
stores=958e7d43-9f1a-48a0-ab03-e6ae2a648144 00:19:19.173 23:03:58 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:19.173 23:03:58 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 958e7d43-9f1a-48a0-ab03-e6ae2a648144 00:19:19.434 23:03:58 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:19.704 23:03:58 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=b40ef967-a999-4c9c-9fd3-ba4bec159d18 00:19:19.704 23:03:58 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b40ef967-a999-4c9c-9fd3-ba4bec159d18 00:19:19.975 23:03:58 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=a5f9a025-2742-41d7-be3b-3930d9886864 00:19:19.975 23:03:58 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a5f9a025-2742-41d7-be3b-3930d9886864 00:19:19.975 23:03:58 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:19.975 23:03:58 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:19.975 23:03:58 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=a5f9a025-2742-41d7-be3b-3930d9886864 00:19:19.975 23:03:58 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:19.975 23:03:58 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size a5f9a025-2742-41d7-be3b-3930d9886864 00:19:19.975 23:03:58 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=a5f9a025-2742-41d7-be3b-3930d9886864 00:19:19.975 23:03:58 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:19.975 23:03:58 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:19.975 23:03:58 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:19.975 23:03:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a5f9a025-2742-41d7-be3b-3930d9886864 00:19:20.237 23:03:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:20.237 { 00:19:20.237 "name": "a5f9a025-2742-41d7-be3b-3930d9886864", 00:19:20.237 "aliases": [ 00:19:20.237 "lvs/nvme0n1p0" 00:19:20.237 ], 00:19:20.237 "product_name": "Logical Volume", 00:19:20.237 "block_size": 4096, 00:19:20.237 "num_blocks": 26476544, 00:19:20.237 "uuid": "a5f9a025-2742-41d7-be3b-3930d9886864", 00:19:20.237 "assigned_rate_limits": { 00:19:20.237 "rw_ios_per_sec": 0, 00:19:20.237 "rw_mbytes_per_sec": 0, 00:19:20.237 "r_mbytes_per_sec": 0, 00:19:20.237 "w_mbytes_per_sec": 0 00:19:20.237 }, 00:19:20.237 "claimed": false, 00:19:20.237 "zoned": false, 00:19:20.237 "supported_io_types": { 00:19:20.237 "read": true, 00:19:20.237 "write": true, 00:19:20.237 "unmap": true, 00:19:20.237 "flush": false, 00:19:20.237 "reset": true, 00:19:20.237 "nvme_admin": false, 00:19:20.237 "nvme_io": false, 00:19:20.237 "nvme_io_md": false, 00:19:20.237 "write_zeroes": true, 00:19:20.237 "zcopy": false, 00:19:20.237 "get_zone_info": false, 00:19:20.237 "zone_management": false, 00:19:20.237 "zone_append": false, 00:19:20.237 "compare": false, 00:19:20.237 "compare_and_write": false, 00:19:20.237 "abort": false, 00:19:20.237 "seek_hole": true, 00:19:20.237 "seek_data": true, 00:19:20.237 "copy": false, 00:19:20.237 "nvme_iov_md": false 00:19:20.237 }, 00:19:20.237 "driver_specific": { 00:19:20.237 "lvol": { 00:19:20.237 "lvol_store_uuid": "b40ef967-a999-4c9c-9fd3-ba4bec159d18", 00:19:20.237 "base_bdev": "nvme0n1", 00:19:20.237 "thin_provision": true, 
00:19:20.237 "num_allocated_clusters": 0, 00:19:20.237 "snapshot": false, 00:19:20.237 "clone": false, 00:19:20.237 "esnap_clone": false 00:19:20.237 } 00:19:20.237 } 00:19:20.237 } 00:19:20.237 ]' 00:19:20.237 23:03:59 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:20.237 23:03:59 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:20.237 23:03:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:20.237 23:03:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:20.237 23:03:59 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:20.237 23:03:59 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:20.237 23:03:59 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:20.237 23:03:59 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:20.237 23:03:59 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:20.498 23:03:59 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:20.498 23:03:59 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:20.498 23:03:59 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size a5f9a025-2742-41d7-be3b-3930d9886864 00:19:20.498 23:03:59 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=a5f9a025-2742-41d7-be3b-3930d9886864 00:19:20.498 23:03:59 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:20.498 23:03:59 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:20.498 23:03:59 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:20.498 23:03:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a5f9a025-2742-41d7-be3b-3930d9886864 00:19:20.757 23:03:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:20.757 { 00:19:20.757 "name": "a5f9a025-2742-41d7-be3b-3930d9886864", 00:19:20.757 "aliases": [ 00:19:20.757 "lvs/nvme0n1p0" 00:19:20.757 ], 00:19:20.757 "product_name": "Logical Volume", 00:19:20.757 "block_size": 4096, 00:19:20.757 "num_blocks": 26476544, 00:19:20.757 "uuid": "a5f9a025-2742-41d7-be3b-3930d9886864", 00:19:20.757 "assigned_rate_limits": { 00:19:20.757 "rw_ios_per_sec": 0, 00:19:20.757 "rw_mbytes_per_sec": 0, 00:19:20.757 "r_mbytes_per_sec": 0, 00:19:20.757 "w_mbytes_per_sec": 0 00:19:20.757 }, 00:19:20.757 "claimed": false, 00:19:20.757 "zoned": false, 00:19:20.757 "supported_io_types": { 00:19:20.757 "read": true, 00:19:20.757 "write": true, 00:19:20.757 "unmap": true, 00:19:20.757 "flush": false, 00:19:20.757 "reset": true, 00:19:20.757 "nvme_admin": false, 00:19:20.757 "nvme_io": false, 00:19:20.757 "nvme_io_md": false, 00:19:20.757 "write_zeroes": true, 00:19:20.757 "zcopy": false, 00:19:20.757 "get_zone_info": false, 00:19:20.757 "zone_management": false, 00:19:20.757 "zone_append": false, 00:19:20.757 "compare": false, 00:19:20.757 "compare_and_write": false, 00:19:20.757 "abort": false, 00:19:20.757 "seek_hole": true, 00:19:20.757 "seek_data": true, 00:19:20.757 "copy": false, 00:19:20.757 "nvme_iov_md": false 00:19:20.757 }, 00:19:20.757 "driver_specific": { 00:19:20.757 "lvol": { 00:19:20.757 "lvol_store_uuid": "b40ef967-a999-4c9c-9fd3-ba4bec159d18", 00:19:20.757 "base_bdev": "nvme0n1", 00:19:20.757 "thin_provision": true, 00:19:20.757 "num_allocated_clusters": 0, 00:19:20.757 "snapshot": false, 00:19:20.757 "clone": false, 00:19:20.757 
"esnap_clone": false 00:19:20.757 } 00:19:20.757 } 00:19:20.757 } 00:19:20.757 ]' 00:19:20.757 23:03:59 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:20.757 23:03:59 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:20.757 23:03:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:20.757 23:03:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:20.757 23:03:59 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:20.757 23:03:59 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:20.757 23:03:59 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:20.757 23:03:59 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:21.014 23:03:59 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:21.014 23:03:59 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:21.014 23:03:59 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size a5f9a025-2742-41d7-be3b-3930d9886864 00:19:21.014 23:03:59 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=a5f9a025-2742-41d7-be3b-3930d9886864 00:19:21.014 23:03:59 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:21.014 23:03:59 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:21.014 23:03:59 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:21.014 23:03:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a5f9a025-2742-41d7-be3b-3930d9886864 00:19:21.271 23:04:00 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:21.271 { 00:19:21.271 "name": "a5f9a025-2742-41d7-be3b-3930d9886864", 00:19:21.271 "aliases": [ 00:19:21.271 "lvs/nvme0n1p0" 00:19:21.271 ], 00:19:21.271 "product_name": "Logical Volume", 00:19:21.271 "block_size": 4096, 00:19:21.271 "num_blocks": 26476544, 00:19:21.271 "uuid": "a5f9a025-2742-41d7-be3b-3930d9886864", 00:19:21.271 "assigned_rate_limits": { 00:19:21.271 "rw_ios_per_sec": 0, 00:19:21.271 "rw_mbytes_per_sec": 0, 00:19:21.271 "r_mbytes_per_sec": 0, 00:19:21.271 "w_mbytes_per_sec": 0 00:19:21.271 }, 00:19:21.271 "claimed": false, 00:19:21.271 "zoned": false, 00:19:21.271 "supported_io_types": { 00:19:21.271 "read": true, 00:19:21.271 "write": true, 00:19:21.271 "unmap": true, 00:19:21.271 "flush": false, 00:19:21.271 "reset": true, 00:19:21.271 "nvme_admin": false, 00:19:21.271 "nvme_io": false, 00:19:21.271 "nvme_io_md": false, 00:19:21.271 "write_zeroes": true, 00:19:21.271 "zcopy": false, 00:19:21.271 "get_zone_info": false, 00:19:21.271 "zone_management": false, 00:19:21.271 "zone_append": false, 00:19:21.271 "compare": false, 00:19:21.271 "compare_and_write": false, 00:19:21.271 "abort": false, 00:19:21.271 "seek_hole": true, 00:19:21.271 "seek_data": true, 00:19:21.271 "copy": false, 00:19:21.271 "nvme_iov_md": false 00:19:21.271 }, 00:19:21.271 "driver_specific": { 00:19:21.271 "lvol": { 00:19:21.271 "lvol_store_uuid": "b40ef967-a999-4c9c-9fd3-ba4bec159d18", 00:19:21.271 "base_bdev": "nvme0n1", 00:19:21.272 "thin_provision": true, 00:19:21.272 "num_allocated_clusters": 0, 00:19:21.272 "snapshot": false, 00:19:21.272 "clone": false, 00:19:21.272 "esnap_clone": false 00:19:21.272 } 00:19:21.272 } 00:19:21.272 } 00:19:21.272 ]' 00:19:21.272 23:04:00 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:21.272 23:04:00 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:19:21.272 23:04:00 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:21.272 23:04:00 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:21.272 23:04:00 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:21.272 23:04:00 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:21.272 23:04:00 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:21.272 23:04:00 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a5f9a025-2742-41d7-be3b-3930d9886864 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:21.534 [2024-11-26 23:04:00.424663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.534 [2024-11-26 23:04:00.424699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:21.534 [2024-11-26 23:04:00.424713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:21.534 [2024-11-26 23:04:00.424719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.534 [2024-11-26 23:04:00.426625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.534 [2024-11-26 23:04:00.426746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:21.534 [2024-11-26 23:04:00.426764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.875 ms 00:19:21.534 [2024-11-26 23:04:00.426772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.534 [2024-11-26 23:04:00.426883] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:21.534 [2024-11-26 23:04:00.427069] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:21.534 [2024-11-26 23:04:00.427084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.534 [2024-11-26 23:04:00.427091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:21.534 [2024-11-26 23:04:00.427099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:19:21.534 [2024-11-26 23:04:00.427105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.534 [2024-11-26 23:04:00.427193] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e0e83087-eb0e-4185-b4d1-3f0b13ab70d0 00:19:21.534 [2024-11-26 23:04:00.428185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.534 [2024-11-26 23:04:00.428211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:21.534 [2024-11-26 23:04:00.428219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:21.534 [2024-11-26 23:04:00.428229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.534 [2024-11-26 23:04:00.433326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.534 [2024-11-26 23:04:00.433426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:21.534 [2024-11-26 23:04:00.433437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.014 ms 00:19:21.534 [2024-11-26 23:04:00.433446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.534 [2024-11-26 23:04:00.433556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
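Reading the get_bdev_size trace above as arithmetic: bdev_get_bdevs reports block_size 4096 and num_blocks 26476544 for the lvol, and (4096 * 26476544) / 1048576 is exactly the bdev_size=103424 MiB the helper echoes back; the earlier nvme0n1 pass is the same computation, 4096 * 1310720 / 1048576 = 5120. A minimal sketch of that helper, using the same rpc.py path and jq filters as the trace (the function name is hypothetical):

bdev_size_mib() {   # usage: bdev_size_mib a5f9a025-2742-41d7-be3b-3930d9886864
    local info bs nb
    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$1")
    bs=$(jq '.[] .block_size' <<< "$info")
    nb=$(jq '.[] .num_blocks' <<< "$info")
    echo $(( bs * nb / 1048576 ))   # bytes -> MiB, e.g. 103424 for the lvol above
}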
00:19:21.534 [2024-11-26 23:04:00.433566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:21.534 [2024-11-26 23:04:00.433575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:19:21.534 [2024-11-26 23:04:00.433583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.534 [2024-11-26 23:04:00.433612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.534 [2024-11-26 23:04:00.433619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:21.534 [2024-11-26 23:04:00.433625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:21.534 [2024-11-26 23:04:00.433633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.534 [2024-11-26 23:04:00.433663] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:21.534 [2024-11-26 23:04:00.434924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.534 [2024-11-26 23:04:00.434948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:21.534 [2024-11-26 23:04:00.434957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.264 ms 00:19:21.534 [2024-11-26 23:04:00.434964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.534 [2024-11-26 23:04:00.435011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.534 [2024-11-26 23:04:00.435019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:21.534 [2024-11-26 23:04:00.435029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:21.534 [2024-11-26 23:04:00.435045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.534 [2024-11-26 23:04:00.435073] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:21.534 [2024-11-26 23:04:00.435178] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:21.534 [2024-11-26 23:04:00.435190] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:21.534 [2024-11-26 23:04:00.435200] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:21.534 [2024-11-26 23:04:00.435209] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:21.534 [2024-11-26 23:04:00.435217] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:21.534 [2024-11-26 23:04:00.435226] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:21.534 [2024-11-26 23:04:00.435233] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:21.534 [2024-11-26 23:04:00.435244] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:21.534 [2024-11-26 23:04:00.435250] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:21.534 [2024-11-26 23:04:00.435258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.534 [2024-11-26 23:04:00.435264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:21.534 [2024-11-26 23:04:00.435271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 
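The layout numbers this startup dump prints are internally consistent and worth a quick cross-check (an editorial sketch, not test output): 23592960 L2P entries at the reported address size of 4 bytes occupy the 90.00 MiB l2p region listed below, and 23592960 blocks of 4 KiB is 92160 MiB of user space, which is the 102400 MiB data_btm region minus the 10% reserved by the --overprovisioning 10 argument passed to bdev_ftl_create above:

echo $(( 23592960 * 4 / 1048576 ))      # 90    -> l2p region size in MiB
echo $(( 23592960 * 4096 / 1048576 ))   # 92160 -> user-visible capacity in MiB
echo $(( 102400 * 90 / 100 ))           # 92160 -> data region after 10% overprovisioning

The --l2p_dram_limit 60 flag caps how much of that 90 MiB table stays resident in DRAM, which is what the later notice 'l2p maximum resident size is: 59 (of 60) MiB' reflects.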
00:19:21.534 [2024-11-26 23:04:00.435276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.534 [2024-11-26 23:04:00.435369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.534 [2024-11-26 23:04:00.435376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:21.534 [2024-11-26 23:04:00.435384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:21.534 [2024-11-26 23:04:00.435389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.534 [2024-11-26 23:04:00.435512] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:21.534 [2024-11-26 23:04:00.435520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:21.534 [2024-11-26 23:04:00.435536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:21.534 [2024-11-26 23:04:00.435541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:21.534 [2024-11-26 23:04:00.435548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:21.534 [2024-11-26 23:04:00.435553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:21.534 [2024-11-26 23:04:00.435560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:21.534 [2024-11-26 23:04:00.435566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:21.534 [2024-11-26 23:04:00.435573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:21.534 [2024-11-26 23:04:00.435578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:21.534 [2024-11-26 23:04:00.435585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:21.534 [2024-11-26 23:04:00.435591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:21.534 [2024-11-26 23:04:00.435599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:21.534 [2024-11-26 23:04:00.435604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:21.534 [2024-11-26 23:04:00.435610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:21.534 [2024-11-26 23:04:00.435615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:21.534 [2024-11-26 23:04:00.435622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:21.534 [2024-11-26 23:04:00.435627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:21.534 [2024-11-26 23:04:00.435633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:21.534 [2024-11-26 23:04:00.435638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:21.534 [2024-11-26 23:04:00.435645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:21.534 [2024-11-26 23:04:00.435651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:21.534 [2024-11-26 23:04:00.435658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:21.534 [2024-11-26 23:04:00.435665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:21.534 [2024-11-26 23:04:00.435671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:21.534 [2024-11-26 23:04:00.435675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:21.534 [2024-11-26 23:04:00.435682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 
107.12 MiB 00:19:21.534 [2024-11-26 23:04:00.435687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:21.534 [2024-11-26 23:04:00.435694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:21.534 [2024-11-26 23:04:00.435700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:21.534 [2024-11-26 23:04:00.435705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:21.534 [2024-11-26 23:04:00.435710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:21.534 [2024-11-26 23:04:00.435717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:21.534 [2024-11-26 23:04:00.435721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:21.534 [2024-11-26 23:04:00.435727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:21.534 [2024-11-26 23:04:00.435732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:21.534 [2024-11-26 23:04:00.435739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:21.534 [2024-11-26 23:04:00.435744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:21.534 [2024-11-26 23:04:00.435750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:21.535 [2024-11-26 23:04:00.435755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:21.535 [2024-11-26 23:04:00.435761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:21.535 [2024-11-26 23:04:00.435766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:21.535 [2024-11-26 23:04:00.435772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:21.535 [2024-11-26 23:04:00.435777] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:21.535 [2024-11-26 23:04:00.435786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:21.535 [2024-11-26 23:04:00.435791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:21.535 [2024-11-26 23:04:00.435798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:21.535 [2024-11-26 23:04:00.435803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:21.535 [2024-11-26 23:04:00.435810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:21.535 [2024-11-26 23:04:00.435815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:21.535 [2024-11-26 23:04:00.435822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:21.535 [2024-11-26 23:04:00.435827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:21.535 [2024-11-26 23:04:00.435833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:21.535 [2024-11-26 23:04:00.435840] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:21.535 [2024-11-26 23:04:00.435848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:21.535 [2024-11-26 23:04:00.435857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:21.535 [2024-11-26 23:04:00.435864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:21.535 [2024-11-26 23:04:00.435869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:21.535 [2024-11-26 23:04:00.435876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:21.535 [2024-11-26 23:04:00.435881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:21.535 [2024-11-26 23:04:00.435890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:21.535 [2024-11-26 23:04:00.435895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:21.535 [2024-11-26 23:04:00.435902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:21.535 [2024-11-26 23:04:00.435907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:21.535 [2024-11-26 23:04:00.435914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:21.535 [2024-11-26 23:04:00.435919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:21.535 [2024-11-26 23:04:00.435932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:21.535 [2024-11-26 23:04:00.435937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:21.535 [2024-11-26 23:04:00.435945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:21.535 [2024-11-26 23:04:00.435950] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:21.535 [2024-11-26 23:04:00.435957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:21.535 [2024-11-26 23:04:00.435963] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:21.535 [2024-11-26 23:04:00.435970] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:21.535 [2024-11-26 23:04:00.435976] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:21.535 [2024-11-26 23:04:00.435983] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:21.535 [2024-11-26 23:04:00.435988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.535 [2024-11-26 23:04:00.435997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:21.535 [2024-11-26 23:04:00.436003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:19:21.535 
[2024-11-26 23:04:00.436017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.535 [2024-11-26 23:04:00.436089] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:19:21.535 [2024-11-26 23:04:00.436098] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:24.068 [2024-11-26 23:04:02.759700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.068 [2024-11-26 23:04:02.759753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:24.068 [2024-11-26 23:04:02.759780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2323.602 ms 00:19:24.068 [2024-11-26 23:04:02.759791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.068 [2024-11-26 23:04:02.768239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.068 [2024-11-26 23:04:02.768282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:24.069 [2024-11-26 23:04:02.768308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.351 ms 00:19:24.069 [2024-11-26 23:04:02.768334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.069 [2024-11-26 23:04:02.768460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.069 [2024-11-26 23:04:02.768474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:24.069 [2024-11-26 23:04:02.768483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:24.069 [2024-11-26 23:04:02.768492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.069 [2024-11-26 23:04:02.788760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.069 [2024-11-26 23:04:02.789020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:24.069 [2024-11-26 23:04:02.789053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.231 ms 00:19:24.069 [2024-11-26 23:04:02.789072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.069 [2024-11-26 23:04:02.789238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.069 [2024-11-26 23:04:02.789263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:24.069 [2024-11-26 23:04:02.789279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:24.069 [2024-11-26 23:04:02.789323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.069 [2024-11-26 23:04:02.789763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.069 [2024-11-26 23:04:02.789803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:24.069 [2024-11-26 23:04:02.789822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:19:24.069 [2024-11-26 23:04:02.789844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.069 [2024-11-26 23:04:02.790070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.069 [2024-11-26 23:04:02.790095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:24.069 [2024-11-26 23:04:02.790110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:19:24.069 [2024-11-26 23:04:02.790127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:24.069 [2024-11-26 23:04:02.797033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.069 [2024-11-26 23:04:02.797066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:24.069 [2024-11-26 23:04:02.797075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.850 ms 00:19:24.069 [2024-11-26 23:04:02.797085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.069 [2024-11-26 23:04:02.805343] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:24.069 [2024-11-26 23:04:02.820309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.069 [2024-11-26 23:04:02.820336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:24.069 [2024-11-26 23:04:02.820349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.125 ms 00:19:24.069 [2024-11-26 23:04:02.820358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.069 [2024-11-26 23:04:02.881546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.069 [2024-11-26 23:04:02.881586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:24.069 [2024-11-26 23:04:02.881603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.119 ms 00:19:24.069 [2024-11-26 23:04:02.881614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.069 [2024-11-26 23:04:02.881825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.069 [2024-11-26 23:04:02.881837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:24.069 [2024-11-26 23:04:02.881847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:19:24.069 [2024-11-26 23:04:02.881856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.069 [2024-11-26 23:04:02.884803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.069 [2024-11-26 23:04:02.884835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:24.069 [2024-11-26 23:04:02.884847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.914 ms 00:19:24.069 [2024-11-26 23:04:02.884857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.069 [2024-11-26 23:04:02.887627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.069 [2024-11-26 23:04:02.887774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:24.069 [2024-11-26 23:04:02.887793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.720 ms 00:19:24.069 [2024-11-26 23:04:02.887801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.069 [2024-11-26 23:04:02.888151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.069 [2024-11-26 23:04:02.888163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:24.069 [2024-11-26 23:04:02.888176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:19:24.069 [2024-11-26 23:04:02.888183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.069 [2024-11-26 23:04:02.918697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.069 [2024-11-26 23:04:02.918823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L 
region 00:19:24.069 [2024-11-26 23:04:02.918845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.477 ms 00:19:24.069 [2024-11-26 23:04:02.918855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.069 [2024-11-26 23:04:02.922893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.069 [2024-11-26 23:04:02.922923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:24.069 [2024-11-26 23:04:02.922934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.958 ms 00:19:24.069 [2024-11-26 23:04:02.922954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.069 [2024-11-26 23:04:02.926084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.069 [2024-11-26 23:04:02.926204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:24.069 [2024-11-26 23:04:02.926222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.074 ms 00:19:24.069 [2024-11-26 23:04:02.926229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.069 [2024-11-26 23:04:02.929780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.069 [2024-11-26 23:04:02.929881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:24.069 [2024-11-26 23:04:02.929965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.505 ms 00:19:24.069 [2024-11-26 23:04:02.929988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.069 [2024-11-26 23:04:02.930094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.069 [2024-11-26 23:04:02.930122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:24.069 [2024-11-26 23:04:02.930144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:24.069 [2024-11-26 23:04:02.930189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.069 [2024-11-26 23:04:02.930330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.069 [2024-11-26 23:04:02.930384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:24.069 [2024-11-26 23:04:02.930429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:19:24.069 [2024-11-26 23:04:02.930451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.069 [2024-11-26 23:04:02.931309] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:24.069 [2024-11-26 23:04:02.932353] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2506.349 ms, result 0 00:19:24.069 [2024-11-26 23:04:02.933129] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:24.069 { 00:19:24.069 "name": "ftl0", 00:19:24.069 "uuid": "e0e83087-eb0e-4185-b4d1-3f0b13ab70d0" 00:19:24.069 } 00:19:24.069 23:04:02 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:24.069 23:04:02 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:24.069 23:04:02 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:24.069 23:04:02 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:19:24.069 23:04:02 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:24.069 23:04:02 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:24.069 23:04:02 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:24.069 23:04:03 ftl.ftl_trim -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:24.329 [ 00:19:24.329 { 00:19:24.329 "name": "ftl0", 00:19:24.329 "aliases": [ 00:19:24.329 "e0e83087-eb0e-4185-b4d1-3f0b13ab70d0" 00:19:24.329 ], 00:19:24.329 "product_name": "FTL disk", 00:19:24.329 "block_size": 4096, 00:19:24.329 "num_blocks": 23592960, 00:19:24.329 "uuid": "e0e83087-eb0e-4185-b4d1-3f0b13ab70d0", 00:19:24.329 "assigned_rate_limits": { 00:19:24.329 "rw_ios_per_sec": 0, 00:19:24.329 "rw_mbytes_per_sec": 0, 00:19:24.329 "r_mbytes_per_sec": 0, 00:19:24.329 "w_mbytes_per_sec": 0 00:19:24.329 }, 00:19:24.329 "claimed": false, 00:19:24.329 "zoned": false, 00:19:24.329 "supported_io_types": { 00:19:24.329 "read": true, 00:19:24.329 "write": true, 00:19:24.329 "unmap": true, 00:19:24.329 "flush": true, 00:19:24.329 "reset": false, 00:19:24.329 "nvme_admin": false, 00:19:24.329 "nvme_io": false, 00:19:24.329 "nvme_io_md": false, 00:19:24.329 "write_zeroes": true, 00:19:24.329 "zcopy": false, 00:19:24.329 "get_zone_info": false, 00:19:24.329 "zone_management": false, 00:19:24.329 "zone_append": false, 00:19:24.329 "compare": false, 00:19:24.329 "compare_and_write": false, 00:19:24.329 "abort": false, 00:19:24.329 "seek_hole": false, 00:19:24.329 "seek_data": false, 00:19:24.329 "copy": false, 00:19:24.329 "nvme_iov_md": false 00:19:24.329 }, 00:19:24.329 "driver_specific": { 00:19:24.329 "ftl": { 00:19:24.329 "base_bdev": "a5f9a025-2742-41d7-be3b-3930d9886864", 00:19:24.329 "cache": "nvc0n1p0" 00:19:24.329 } 00:19:24.329 } 00:19:24.329 } 00:19:24.329 ] 00:19:24.329 23:04:03 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:19:24.329 23:04:03 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:24.329 23:04:03 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:24.587 23:04:03 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:24.587 23:04:03 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:24.847 23:04:03 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:24.847 { 00:19:24.847 "name": "ftl0", 00:19:24.847 "aliases": [ 00:19:24.847 "e0e83087-eb0e-4185-b4d1-3f0b13ab70d0" 00:19:24.847 ], 00:19:24.847 "product_name": "FTL disk", 00:19:24.847 "block_size": 4096, 00:19:24.847 "num_blocks": 23592960, 00:19:24.847 "uuid": "e0e83087-eb0e-4185-b4d1-3f0b13ab70d0", 00:19:24.847 "assigned_rate_limits": { 00:19:24.847 "rw_ios_per_sec": 0, 00:19:24.847 "rw_mbytes_per_sec": 0, 00:19:24.847 "r_mbytes_per_sec": 0, 00:19:24.847 "w_mbytes_per_sec": 0 00:19:24.847 }, 00:19:24.847 "claimed": false, 00:19:24.847 "zoned": false, 00:19:24.847 "supported_io_types": { 00:19:24.847 "read": true, 00:19:24.847 "write": true, 00:19:24.847 "unmap": true, 00:19:24.847 "flush": true, 00:19:24.847 "reset": false, 00:19:24.847 "nvme_admin": false, 00:19:24.847 "nvme_io": false, 00:19:24.847 "nvme_io_md": false, 00:19:24.847 "write_zeroes": true, 00:19:24.847 "zcopy": false, 00:19:24.847 "get_zone_info": false, 00:19:24.847 "zone_management": false, 00:19:24.847 "zone_append": false, 00:19:24.847 "compare": false, 00:19:24.847 "compare_and_write": false, 00:19:24.847 "abort": false, 00:19:24.847 "seek_hole": false, 
00:19:24.847 "seek_data": false, 00:19:24.847 "copy": false, 00:19:24.847 "nvme_iov_md": false 00:19:24.847 }, 00:19:24.847 "driver_specific": { 00:19:24.847 "ftl": { 00:19:24.847 "base_bdev": "a5f9a025-2742-41d7-be3b-3930d9886864", 00:19:24.847 "cache": "nvc0n1p0" 00:19:24.847 } 00:19:24.847 } 00:19:24.847 } 00:19:24.847 ]' 00:19:24.847 23:04:03 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:24.847 23:04:03 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:24.847 23:04:03 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:24.847 [2024-11-26 23:04:03.961325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.847 [2024-11-26 23:04:03.961366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:24.847 [2024-11-26 23:04:03.961389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:24.847 [2024-11-26 23:04:03.961401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.847 [2024-11-26 23:04:03.961436] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:24.847 [2024-11-26 23:04:03.961882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.847 [2024-11-26 23:04:03.961896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:24.847 [2024-11-26 23:04:03.961908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:19:24.847 [2024-11-26 23:04:03.961918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.847 [2024-11-26 23:04:03.962531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.847 [2024-11-26 23:04:03.962543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:24.847 [2024-11-26 23:04:03.962555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:19:24.847 [2024-11-26 23:04:03.962562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.847 [2024-11-26 23:04:03.966221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.847 [2024-11-26 23:04:03.966244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:24.847 [2024-11-26 23:04:03.966256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.622 ms 00:19:24.847 [2024-11-26 23:04:03.966264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.108 [2024-11-26 23:04:03.973233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.108 [2024-11-26 23:04:03.973261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:25.108 [2024-11-26 23:04:03.973275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.905 ms 00:19:25.108 [2024-11-26 23:04:03.973282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.108 [2024-11-26 23:04:03.974828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.108 [2024-11-26 23:04:03.974859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:25.108 [2024-11-26 23:04:03.974870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.448 ms 00:19:25.108 [2024-11-26 23:04:03.974877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.108 [2024-11-26 23:04:03.979445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:19:25.108 [2024-11-26 23:04:03.979578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:25.108 [2024-11-26 23:04:03.979597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.521 ms 00:19:25.108 [2024-11-26 23:04:03.979606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.108 [2024-11-26 23:04:03.979794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.108 [2024-11-26 23:04:03.979815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:25.108 [2024-11-26 23:04:03.979827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:19:25.108 [2024-11-26 23:04:03.979836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.108 [2024-11-26 23:04:03.981783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.108 [2024-11-26 23:04:03.981813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:25.108 [2024-11-26 23:04:03.981826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.911 ms 00:19:25.108 [2024-11-26 23:04:03.981833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.108 [2024-11-26 23:04:03.983276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.108 [2024-11-26 23:04:03.983399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:25.108 [2024-11-26 23:04:03.983416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.393 ms 00:19:25.108 [2024-11-26 23:04:03.983423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.108 [2024-11-26 23:04:03.984562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.108 [2024-11-26 23:04:03.984585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:25.108 [2024-11-26 23:04:03.984596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.088 ms 00:19:25.108 [2024-11-26 23:04:03.984602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.108 [2024-11-26 23:04:03.985554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.108 [2024-11-26 23:04:03.985583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:25.108 [2024-11-26 23:04:03.985594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.839 ms 00:19:25.108 [2024-11-26 23:04:03.985601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.108 [2024-11-26 23:04:03.985656] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:25.109 [2024-11-26 23:04:03.985671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 
0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.985997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.986005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.986015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.986022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.986031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.986038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.986047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.986055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.986063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.986071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.986080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.986088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.986097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.986104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.986114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.986121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.986130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.986138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.986146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.986154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.986163] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.986170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.986180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.986188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.986196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.986204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.986212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.986220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.986229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.986236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.986246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.986254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:25.109 [2024-11-26 23:04:03.986263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:25.110 [2024-11-26 23:04:03.986271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:25.110 [2024-11-26 23:04:03.986280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:25.110 [2024-11-26 23:04:03.986288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:25.110 [2024-11-26 23:04:03.986308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:25.110 [2024-11-26 23:04:03.986316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:25.110 [2024-11-26 23:04:03.986325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:25.110 [2024-11-26 23:04:03.986332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:25.110 [2024-11-26 23:04:03.986341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:25.110 [2024-11-26 23:04:03.986349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:25.110 [2024-11-26 23:04:03.986358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:25.110 [2024-11-26 23:04:03.986365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:25.110 [2024-11-26 23:04:03.986374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:25.110 [2024-11-26 
23:04:03.986382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:25.110 [2024-11-26 23:04:03.986392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:25.110 [2024-11-26 23:04:03.986400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:25.110 [2024-11-26 23:04:03.986409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:25.110 [2024-11-26 23:04:03.986417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:25.110 [2024-11-26 23:04:03.986426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:25.110 [2024-11-26 23:04:03.986434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:25.110 [2024-11-26 23:04:03.986443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:25.110 [2024-11-26 23:04:03.986450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:25.110 [2024-11-26 23:04:03.986459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:25.110 [2024-11-26 23:04:03.986466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:25.110 [2024-11-26 23:04:03.986475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:25.110 [2024-11-26 23:04:03.986483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:25.110 [2024-11-26 23:04:03.986492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:25.110 [2024-11-26 23:04:03.986499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:25.110 [2024-11-26 23:04:03.986507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:25.110 [2024-11-26 23:04:03.986515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:25.110 [2024-11-26 23:04:03.986526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:25.110 [2024-11-26 23:04:03.986533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:25.110 [2024-11-26 23:04:03.986543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:25.110 [2024-11-26 23:04:03.986558] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:25.110 [2024-11-26 23:04:03.986582] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e0e83087-eb0e-4185-b4d1-3f0b13ab70d0 00:19:25.110 [2024-11-26 23:04:03.986591] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:25.110 [2024-11-26 23:04:03.986602] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:25.110 [2024-11-26 23:04:03.986609] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:25.110 [2024-11-26 23:04:03.986629] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 
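In the statistics dump above, WAF is the write amplification factor; assuming the usual definition as media writes divided by user writes, the reported counters give

    \mathrm{WAF} \;=\; \frac{\text{total writes}}{\text{user writes}} \;=\; \frac{960}{0} \;\to\; \texttt{inf}

so with no user writes yet, only the 960 internal writes recorded so far, the ratio is undefined and the dump prints inf.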
00:19:25.110 [2024-11-26 23:04:03.986636] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:25.110 [2024-11-26 23:04:03.986645] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:25.110 [2024-11-26 23:04:03.986652] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:25.110 [2024-11-26 23:04:03.986660] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:25.110 [2024-11-26 23:04:03.986667] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:25.110 [2024-11-26 23:04:03.986676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.110 [2024-11-26 23:04:03.986691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:25.110 [2024-11-26 23:04:03.986703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.022 ms 00:19:25.110 [2024-11-26 23:04:03.986710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.110 [2024-11-26 23:04:03.988727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.110 [2024-11-26 23:04:03.988842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:25.110 [2024-11-26 23:04:03.988913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.971 ms 00:19:25.110 [2024-11-26 23:04:03.988937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.110 [2024-11-26 23:04:03.989040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.110 [2024-11-26 23:04:03.989066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:25.110 [2024-11-26 23:04:03.989121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:25.110 [2024-11-26 23:04:03.989145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.110 [2024-11-26 23:04:03.994566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.110 [2024-11-26 23:04:03.994699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:25.110 [2024-11-26 23:04:03.994808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.110 [2024-11-26 23:04:03.994832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.110 [2024-11-26 23:04:03.994956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.110 [2024-11-26 23:04:03.995016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:25.110 [2024-11-26 23:04:03.995120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.110 [2024-11-26 23:04:03.995143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.110 [2024-11-26 23:04:03.995286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.110 [2024-11-26 23:04:03.995355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:25.110 [2024-11-26 23:04:03.995410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.110 [2024-11-26 23:04:03.995433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.110 [2024-11-26 23:04:03.995478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.110 [2024-11-26 23:04:03.995506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:25.110 [2024-11-26 23:04:03.995528] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.110 [2024-11-26 23:04:03.995573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.110 [2024-11-26 23:04:04.004924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.110 [2024-11-26 23:04:04.005055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:25.110 [2024-11-26 23:04:04.005134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.110 [2024-11-26 23:04:04.005158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.110 [2024-11-26 23:04:04.012986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.110 [2024-11-26 23:04:04.013113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:25.111 [2024-11-26 23:04:04.013168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.111 [2024-11-26 23:04:04.013190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.111 [2024-11-26 23:04:04.013255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.111 [2024-11-26 23:04:04.013282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:25.111 [2024-11-26 23:04:04.013408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.111 [2024-11-26 23:04:04.013464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.111 [2024-11-26 23:04:04.013541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.111 [2024-11-26 23:04:04.013585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:25.111 [2024-11-26 23:04:04.013647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.111 [2024-11-26 23:04:04.013658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.111 [2024-11-26 23:04:04.013746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.111 [2024-11-26 23:04:04.013758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:25.111 [2024-11-26 23:04:04.013768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.111 [2024-11-26 23:04:04.013775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.111 [2024-11-26 23:04:04.013830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.111 [2024-11-26 23:04:04.013840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:25.111 [2024-11-26 23:04:04.013851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.111 [2024-11-26 23:04:04.013858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.111 [2024-11-26 23:04:04.013900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.111 [2024-11-26 23:04:04.013910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:25.111 [2024-11-26 23:04:04.013922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.111 [2024-11-26 23:04:04.013929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.111 [2024-11-26 23:04:04.013997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.111 [2024-11-26 23:04:04.014008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base 
bdev 00:19:25.111 [2024-11-26 23:04:04.014018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.111 [2024-11-26 23:04:04.014025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.111 [2024-11-26 23:04:04.014221] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 52.872 ms, result 0 00:19:25.111 true 00:19:25.111 23:04:04 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 89199 00:19:25.111 23:04:04 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 89199 ']' 00:19:25.111 23:04:04 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 89199 00:19:25.111 23:04:04 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:25.111 23:04:04 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.111 23:04:04 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89199 00:19:25.111 killing process with pid 89199 00:19:25.111 23:04:04 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:25.111 23:04:04 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:25.111 23:04:04 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89199' 00:19:25.111 23:04:04 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 89199 00:19:25.111 23:04:04 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 89199 00:19:31.679 23:04:09 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:31.679 65536+0 records in 00:19:31.679 65536+0 records out 00:19:31.679 268435456 bytes (268 MB, 256 MiB) copied, 1.11176 s, 241 MB/s 00:19:31.679 23:04:10 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:31.679 [2024-11-26 23:04:10.711928] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:19:31.679 [2024-11-26 23:04:10.712065] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89371 ] 00:19:31.940 [2024-11-26 23:04:10.848808] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
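A quick arithmetic check on the dd step above, 65536 blocks of 4 KiB:

    65536 \times 4096\,\mathrm{B} \;=\; 268\,435\,456\,\mathrm{B} \;=\; 256\,\mathrm{MiB}, \qquad \frac{268\,435\,456\,\mathrm{B}}{1.11176\,\mathrm{s}} \;\approx\; 241\,\mathrm{MB/s}

which matches the reported "268 MB, 256 MiB" and "241 MB/s" (dd's MB is decimal, 10^6 bytes).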
00:19:31.940 [2024-11-26 23:04:10.878864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.940 [2024-11-26 23:04:10.920346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.203 [2024-11-26 23:04:11.075203] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:32.203 [2024-11-26 23:04:11.075612] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:32.203 [2024-11-26 23:04:11.239311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.203 [2024-11-26 23:04:11.239552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:32.203 [2024-11-26 23:04:11.239581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:32.203 [2024-11-26 23:04:11.239591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.203 [2024-11-26 23:04:11.242376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.203 [2024-11-26 23:04:11.242429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:32.203 [2024-11-26 23:04:11.242445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.750 ms 00:19:32.203 [2024-11-26 23:04:11.242453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.203 [2024-11-26 23:04:11.242564] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:32.203 [2024-11-26 23:04:11.242897] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:32.203 [2024-11-26 23:04:11.242917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.203 [2024-11-26 23:04:11.242926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:32.203 [2024-11-26 23:04:11.242936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:19:32.203 [2024-11-26 23:04:11.242945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.203 [2024-11-26 23:04:11.245313] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:32.203 [2024-11-26 23:04:11.248256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.203 [2024-11-26 23:04:11.248307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:32.203 [2024-11-26 23:04:11.248318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.946 ms 00:19:32.203 [2024-11-26 23:04:11.248330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.203 [2024-11-26 23:04:11.248400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.203 [2024-11-26 23:04:11.248410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:32.203 [2024-11-26 23:04:11.248418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:32.203 [2024-11-26 23:04:11.248426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.203 [2024-11-26 23:04:11.255309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.203 [2024-11-26 23:04:11.255338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:32.203 [2024-11-26 23:04:11.255348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.842 ms 00:19:32.203 [2024-11-26 23:04:11.255363] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.203 [2024-11-26 23:04:11.255477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.203 [2024-11-26 23:04:11.255488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:32.203 [2024-11-26 23:04:11.255496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:32.203 [2024-11-26 23:04:11.255505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.203 [2024-11-26 23:04:11.255536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.203 [2024-11-26 23:04:11.255546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:32.203 [2024-11-26 23:04:11.255554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:32.203 [2024-11-26 23:04:11.255561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.203 [2024-11-26 23:04:11.255583] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:32.203 [2024-11-26 23:04:11.257320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.203 [2024-11-26 23:04:11.257347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:32.203 [2024-11-26 23:04:11.257366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.743 ms 00:19:32.203 [2024-11-26 23:04:11.257377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.203 [2024-11-26 23:04:11.257414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.203 [2024-11-26 23:04:11.257424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:32.203 [2024-11-26 23:04:11.257433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:32.203 [2024-11-26 23:04:11.257441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.203 [2024-11-26 23:04:11.257459] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:32.203 [2024-11-26 23:04:11.257479] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:32.203 [2024-11-26 23:04:11.257517] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:32.203 [2024-11-26 23:04:11.257540] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:32.203 [2024-11-26 23:04:11.257646] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:32.203 [2024-11-26 23:04:11.257656] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:32.203 [2024-11-26 23:04:11.257667] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:32.203 [2024-11-26 23:04:11.257677] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:32.203 [2024-11-26 23:04:11.257685] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:32.203 [2024-11-26 23:04:11.257693] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:32.203 [2024-11-26 23:04:11.257701] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
L2P address size: 4 00:19:32.203 [2024-11-26 23:04:11.257710] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:32.203 [2024-11-26 23:04:11.257719] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:32.203 [2024-11-26 23:04:11.257726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.203 [2024-11-26 23:04:11.257734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:32.203 [2024-11-26 23:04:11.257741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:19:32.203 [2024-11-26 23:04:11.257748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.203 [2024-11-26 23:04:11.257834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.203 [2024-11-26 23:04:11.257847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:32.203 [2024-11-26 23:04:11.257855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:32.203 [2024-11-26 23:04:11.257861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.203 [2024-11-26 23:04:11.257961] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:32.203 [2024-11-26 23:04:11.257970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:32.203 [2024-11-26 23:04:11.257978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:32.203 [2024-11-26 23:04:11.257986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.203 [2024-11-26 23:04:11.257999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:32.203 [2024-11-26 23:04:11.258006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:32.203 [2024-11-26 23:04:11.258015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:32.203 [2024-11-26 23:04:11.258022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:32.203 [2024-11-26 23:04:11.258029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:32.203 [2024-11-26 23:04:11.258035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:32.203 [2024-11-26 23:04:11.258042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:32.203 [2024-11-26 23:04:11.258051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:32.203 [2024-11-26 23:04:11.258057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:32.204 [2024-11-26 23:04:11.258064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:32.204 [2024-11-26 23:04:11.258070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:32.204 [2024-11-26 23:04:11.258077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.204 [2024-11-26 23:04:11.258083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:32.204 [2024-11-26 23:04:11.258089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:32.204 [2024-11-26 23:04:11.258096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.204 [2024-11-26 23:04:11.258103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:32.204 [2024-11-26 23:04:11.258110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:32.204 [2024-11-26 23:04:11.258116] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:32.204 [2024-11-26 23:04:11.258128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:32.204 [2024-11-26 23:04:11.258135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:32.204 [2024-11-26 23:04:11.258141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:32.204 [2024-11-26 23:04:11.258148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:32.204 [2024-11-26 23:04:11.258154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:32.204 [2024-11-26 23:04:11.258161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:32.204 [2024-11-26 23:04:11.258168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:32.204 [2024-11-26 23:04:11.258174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:32.204 [2024-11-26 23:04:11.258180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:32.204 [2024-11-26 23:04:11.258186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:32.204 [2024-11-26 23:04:11.258193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:32.204 [2024-11-26 23:04:11.258200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:32.204 [2024-11-26 23:04:11.258206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:32.204 [2024-11-26 23:04:11.258213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:32.204 [2024-11-26 23:04:11.258219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:32.204 [2024-11-26 23:04:11.258226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:32.204 [2024-11-26 23:04:11.258234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:32.204 [2024-11-26 23:04:11.258241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.204 [2024-11-26 23:04:11.258248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:32.204 [2024-11-26 23:04:11.258254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:32.204 [2024-11-26 23:04:11.258260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.204 [2024-11-26 23:04:11.258268] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:32.204 [2024-11-26 23:04:11.258276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:32.204 [2024-11-26 23:04:11.258283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:32.204 [2024-11-26 23:04:11.258291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.204 [2024-11-26 23:04:11.258323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:32.204 [2024-11-26 23:04:11.258331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:32.204 [2024-11-26 23:04:11.258338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:32.204 [2024-11-26 23:04:11.258345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:32.204 [2024-11-26 23:04:11.258352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:32.204 [2024-11-26 23:04:11.258358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:19:32.204 [2024-11-26 23:04:11.258367] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:32.204 [2024-11-26 23:04:11.258378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:32.204 [2024-11-26 23:04:11.258389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:32.204 [2024-11-26 23:04:11.258397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:32.204 [2024-11-26 23:04:11.258405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:32.204 [2024-11-26 23:04:11.258412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:32.204 [2024-11-26 23:04:11.258419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:32.204 [2024-11-26 23:04:11.258426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:32.204 [2024-11-26 23:04:11.258434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:32.204 [2024-11-26 23:04:11.258441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:32.204 [2024-11-26 23:04:11.258448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:32.204 [2024-11-26 23:04:11.258456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:32.204 [2024-11-26 23:04:11.258464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:32.204 [2024-11-26 23:04:11.258471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:32.204 [2024-11-26 23:04:11.258479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:32.204 [2024-11-26 23:04:11.258487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:32.204 [2024-11-26 23:04:11.258494] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:32.204 [2024-11-26 23:04:11.258504] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:32.204 [2024-11-26 23:04:11.258512] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:32.204 [2024-11-26 23:04:11.258520] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:32.204 [2024-11-26 23:04:11.258527] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:32.204 [2024-11-26 23:04:11.258534] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:32.204 [2024-11-26 23:04:11.258542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.204 [2024-11-26 23:04:11.258553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:32.204 [2024-11-26 23:04:11.258561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.650 ms 00:19:32.204 [2024-11-26 23:04:11.258568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.204 [2024-11-26 23:04:11.270904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.204 [2024-11-26 23:04:11.270942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:32.204 [2024-11-26 23:04:11.270954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.287 ms 00:19:32.204 [2024-11-26 23:04:11.270963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.204 [2024-11-26 23:04:11.271094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.204 [2024-11-26 23:04:11.271106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:32.204 [2024-11-26 23:04:11.271115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:32.204 [2024-11-26 23:04:11.271123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.204 [2024-11-26 23:04:11.292592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.204 [2024-11-26 23:04:11.292653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:32.204 [2024-11-26 23:04:11.292670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.441 ms 00:19:32.204 [2024-11-26 23:04:11.292689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.204 [2024-11-26 23:04:11.292796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.204 [2024-11-26 23:04:11.292814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:32.204 [2024-11-26 23:04:11.292834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:32.204 [2024-11-26 23:04:11.292848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.204 [2024-11-26 23:04:11.293362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.204 [2024-11-26 23:04:11.293446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:32.204 [2024-11-26 23:04:11.293464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:19:32.204 [2024-11-26 23:04:11.293475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.204 [2024-11-26 23:04:11.293675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.204 [2024-11-26 23:04:11.293694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:32.204 [2024-11-26 23:04:11.293705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:19:32.204 [2024-11-26 23:04:11.293716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.204 [2024-11-26 23:04:11.302088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.204 [2024-11-26 
23:04:11.302138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:32.204 [2024-11-26 23:04:11.302152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.342 ms 00:19:32.204 [2024-11-26 23:04:11.302169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.204 [2024-11-26 23:04:11.305667] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:32.204 [2024-11-26 23:04:11.305706] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:32.204 [2024-11-26 23:04:11.305718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.204 [2024-11-26 23:04:11.305726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:32.205 [2024-11-26 23:04:11.305735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.407 ms 00:19:32.205 [2024-11-26 23:04:11.305742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.205 [2024-11-26 23:04:11.321151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.205 [2024-11-26 23:04:11.321283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:32.205 [2024-11-26 23:04:11.321315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.362 ms 00:19:32.205 [2024-11-26 23:04:11.321329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.205 [2024-11-26 23:04:11.323686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.205 [2024-11-26 23:04:11.323722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:32.205 [2024-11-26 23:04:11.323732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.286 ms 00:19:32.205 [2024-11-26 23:04:11.323740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.205 [2024-11-26 23:04:11.325460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.205 [2024-11-26 23:04:11.325493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:32.205 [2024-11-26 23:04:11.325503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.679 ms 00:19:32.205 [2024-11-26 23:04:11.325511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.205 [2024-11-26 23:04:11.325847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.205 [2024-11-26 23:04:11.325866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:32.205 [2024-11-26 23:04:11.325875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:19:32.205 [2024-11-26 23:04:11.325887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.466 [2024-11-26 23:04:11.346346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.466 [2024-11-26 23:04:11.346400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:32.466 [2024-11-26 23:04:11.346414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.437 ms 00:19:32.466 [2024-11-26 23:04:11.346422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.466 [2024-11-26 23:04:11.354358] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:32.466 [2024-11-26 23:04:11.372290] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.466 [2024-11-26 23:04:11.372353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:32.466 [2024-11-26 23:04:11.372366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.782 ms 00:19:32.467 [2024-11-26 23:04:11.372374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.467 [2024-11-26 23:04:11.372464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.467 [2024-11-26 23:04:11.372476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:32.467 [2024-11-26 23:04:11.372488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:32.467 [2024-11-26 23:04:11.372499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.467 [2024-11-26 23:04:11.372556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.467 [2024-11-26 23:04:11.372565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:32.467 [2024-11-26 23:04:11.372574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:32.467 [2024-11-26 23:04:11.372581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.467 [2024-11-26 23:04:11.372610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.467 [2024-11-26 23:04:11.372619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:32.467 [2024-11-26 23:04:11.372627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:32.467 [2024-11-26 23:04:11.372638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.467 [2024-11-26 23:04:11.372676] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:32.467 [2024-11-26 23:04:11.372690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.467 [2024-11-26 23:04:11.372698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:32.467 [2024-11-26 23:04:11.372706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:32.467 [2024-11-26 23:04:11.372714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.467 [2024-11-26 23:04:11.377952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.467 [2024-11-26 23:04:11.377991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:32.467 [2024-11-26 23:04:11.378002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.214 ms 00:19:32.467 [2024-11-26 23:04:11.378017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.467 [2024-11-26 23:04:11.378106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.467 [2024-11-26 23:04:11.378117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:32.467 [2024-11-26 23:04:11.378126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:32.467 [2024-11-26 23:04:11.378134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.467 [2024-11-26 23:04:11.379094] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:32.467 [2024-11-26 23:04:11.380237] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 139.509 
ms, result 0 00:19:32.467 [2024-11-26 23:04:11.381423] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:32.467 [2024-11-26 23:04:11.389017] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:33.412  [2024-11-26T23:04:13.482Z] Copying: 17/256 [MB] (17 MBps) [2024-11-26T23:04:14.429Z] Copying: 29/256 [MB] (11 MBps) [2024-11-26T23:04:15.818Z] Copying: 46/256 [MB] (17 MBps) [2024-11-26T23:04:16.392Z] Copying: 62/256 [MB] (15 MBps) [2024-11-26T23:04:17.779Z] Copying: 78/256 [MB] (16 MBps) [2024-11-26T23:04:18.722Z] Copying: 91/256 [MB] (12 MBps) [2024-11-26T23:04:19.668Z] Copying: 107/256 [MB] (16 MBps) [2024-11-26T23:04:20.667Z] Copying: 121/256 [MB] (14 MBps) [2024-11-26T23:04:21.613Z] Copying: 134/256 [MB] (12 MBps) [2024-11-26T23:04:22.557Z] Copying: 148/256 [MB] (13 MBps) [2024-11-26T23:04:23.503Z] Copying: 160/256 [MB] (12 MBps) [2024-11-26T23:04:24.448Z] Copying: 173/256 [MB] (12 MBps) [2024-11-26T23:04:25.391Z] Copying: 189/256 [MB] (15 MBps) [2024-11-26T23:04:26.778Z] Copying: 201/256 [MB] (12 MBps) [2024-11-26T23:04:27.722Z] Copying: 213/256 [MB] (12 MBps) [2024-11-26T23:04:28.666Z] Copying: 224/256 [MB] (11 MBps) [2024-11-26T23:04:29.610Z] Copying: 239/256 [MB] (14 MBps) [2024-11-26T23:04:30.185Z] Copying: 250/256 [MB] (11 MBps) [2024-11-26T23:04:30.185Z] Copying: 256/256 [MB] (average 13 MBps)[2024-11-26 23:04:29.871976] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:51.058 [2024-11-26 23:04:29.873062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.058 [2024-11-26 23:04:29.873089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:51.058 [2024-11-26 23:04:29.873101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:51.058 [2024-11-26 23:04:29.873108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.058 [2024-11-26 23:04:29.873131] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:51.058 [2024-11-26 23:04:29.873572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.058 [2024-11-26 23:04:29.873603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:51.058 [2024-11-26 23:04:29.873612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:19:51.058 [2024-11-26 23:04:29.873625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.058 [2024-11-26 23:04:29.876544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.058 [2024-11-26 23:04:29.876575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:51.058 [2024-11-26 23:04:29.876590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.898 ms 00:19:51.058 [2024-11-26 23:04:29.876597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.058 [2024-11-26 23:04:29.884509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.058 [2024-11-26 23:04:29.884548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:51.058 [2024-11-26 23:04:29.884557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.896 ms 00:19:51.058 [2024-11-26 23:04:29.884564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.058 
[2024-11-26 23:04:29.891480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.058 [2024-11-26 23:04:29.891505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:51.059 [2024-11-26 23:04:29.891514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.885 ms 00:19:51.059 [2024-11-26 23:04:29.891526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.059 [2024-11-26 23:04:29.893681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.059 [2024-11-26 23:04:29.893723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:51.059 [2024-11-26 23:04:29.893732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.102 ms 00:19:51.059 [2024-11-26 23:04:29.893740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.059 [2024-11-26 23:04:29.897593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.059 [2024-11-26 23:04:29.897630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:51.059 [2024-11-26 23:04:29.897639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.823 ms 00:19:51.059 [2024-11-26 23:04:29.897646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.059 [2024-11-26 23:04:29.897764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.059 [2024-11-26 23:04:29.897773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:51.059 [2024-11-26 23:04:29.897788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:19:51.059 [2024-11-26 23:04:29.897798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.059 [2024-11-26 23:04:29.900645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.059 [2024-11-26 23:04:29.900674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:51.059 [2024-11-26 23:04:29.900684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.832 ms 00:19:51.059 [2024-11-26 23:04:29.900691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.059 [2024-11-26 23:04:29.902929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.059 [2024-11-26 23:04:29.903062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:51.059 [2024-11-26 23:04:29.903075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.208 ms 00:19:51.059 [2024-11-26 23:04:29.903082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.059 [2024-11-26 23:04:29.904703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.059 [2024-11-26 23:04:29.904729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:51.059 [2024-11-26 23:04:29.904738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.589 ms 00:19:51.059 [2024-11-26 23:04:29.904744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.059 [2024-11-26 23:04:29.906498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.059 [2024-11-26 23:04:29.906528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:51.059 [2024-11-26 23:04:29.906537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.698 ms 00:19:51.059 [2024-11-26 23:04:29.906544] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.059 [2024-11-26 23:04:29.906573] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:51.059 [2024-11-26 23:04:29.906587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906966] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.906994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.907001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.907009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.907017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.907024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.907031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.907038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.907045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.907052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.907060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.907069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:51.059 [2024-11-26 23:04:29.907077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 23:04:29.907084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 23:04:29.907091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 23:04:29.907098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 23:04:29.907106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 23:04:29.907113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 23:04:29.907120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 23:04:29.907127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 23:04:29.907134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 23:04:29.907141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 
23:04:29.907149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 23:04:29.907156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 23:04:29.907163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 23:04:29.907170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 23:04:29.907178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 23:04:29.907185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 23:04:29.907192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 23:04:29.907199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 23:04:29.907207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 23:04:29.907214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 23:04:29.907221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 23:04:29.907229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 23:04:29.907236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 23:04:29.907243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 23:04:29.907250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 23:04:29.907257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 23:04:29.907264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 23:04:29.907272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 23:04:29.907279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 23:04:29.907286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 23:04:29.907309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 23:04:29.907318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 23:04:29.907325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 23:04:29.907333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:51.060 [2024-11-26 23:04:29.907340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 
00:19:51.060 [2024-11-26 23:04:29.907348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:19:51.060 [2024-11-26 23:04:29.907355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:19:51.060 [2024-11-26 23:04:29.907378] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:19:51.060 [2024-11-26 23:04:29.907386] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e0e83087-eb0e-4185-b4d1-3f0b13ab70d0
00:19:51.060 [2024-11-26 23:04:29.907394] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:19:51.060 [2024-11-26 23:04:29.907401] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:19:51.060 [2024-11-26 23:04:29.907408] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:19:51.060 [2024-11-26 23:04:29.907415] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:19:51.060 [2024-11-26 23:04:29.907425] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:19:51.060 [2024-11-26 23:04:29.907433] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:19:51.060 [2024-11-26 23:04:29.907443] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:19:51.060 [2024-11-26 23:04:29.907450] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:19:51.060 [2024-11-26 23:04:29.907456] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:19:51.060 [2024-11-26 23:04:29.907463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:51.060 [2024-11-26 23:04:29.907473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:19:51.060 [2024-11-26 23:04:29.907481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.891 ms
00:19:51.060 [2024-11-26 23:04:29.907489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:51.060 [2024-11-26 23:04:29.908970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:51.060 [2024-11-26 23:04:29.908992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:19:51.060 [2024-11-26 23:04:29.909001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.465 ms
00:19:51.060 [2024-11-26 23:04:29.909008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:51.060 [2024-11-26 23:04:29.909091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:51.060 [2024-11-26 23:04:29.909101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:19:51.060 [2024-11-26 23:04:29.909110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms
00:19:51.060 [2024-11-26 23:04:29.909117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:51.060 [2024-11-26 23:04:29.914504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:51.060 [2024-11-26 23:04:29.914627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:19:51.060 [2024-11-26 23:04:29.914691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:51.060 [2024-11-26 23:04:29.914722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:51.060 [2024-11-26 23:04:29.914805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:51.060 [2024-11-26 23:04:29.914829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:19:51.060 [2024-11-26 23:04:29.914848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:51.060 [2024-11-26 23:04:29.914866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:51.060 [2024-11-26 23:04:29.914918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:51.060 [2024-11-26 23:04:29.914942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:19:51.060 [2024-11-26 23:04:29.914962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:51.060 [2024-11-26 23:04:29.915036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:51.060 [2024-11-26 23:04:29.915073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:51.060 [2024-11-26 23:04:29.915094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:19:51.060 [2024-11-26 23:04:29.915113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:51.060 [2024-11-26 23:04:29.915132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:51.060 [2024-11-26 23:04:29.924362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:51.060 [2024-11-26 23:04:29.924507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:19:51.060 [2024-11-26 23:04:29.924557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:51.060 [2024-11-26 23:04:29.924583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:51.060 [2024-11-26 23:04:29.932092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:51.060 [2024-11-26 23:04:29.932230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:19:51.060 [2024-11-26 23:04:29.932279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:51.060 [2024-11-26 23:04:29.932321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:51.060 [2024-11-26 23:04:29.932381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:51.060 [2024-11-26 23:04:29.932407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:19:51.060 [2024-11-26 23:04:29.932427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:51.060 [2024-11-26 23:04:29.932448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:51.060 [2024-11-26 23:04:29.932514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:51.060 [2024-11-26 23:04:29.932541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:19:51.060 [2024-11-26 23:04:29.932562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:51.060 [2024-11-26 23:04:29.932624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:51.060 [2024-11-26 23:04:29.932717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:51.060 [2024-11-26 23:04:29.932780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:19:51.060 [2024-11-26 23:04:29.933152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:51.060 [2024-11-26 23:04:29.933246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:51.060 [2024-11-26 23:04:29.933340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:51.060 [2024-11-26 23:04:29.933354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:19:51.060 [2024-11-26 23:04:29.933370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:51.060 [2024-11-26 23:04:29.933379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:51.060 [2024-11-26 23:04:29.933430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:51.060 [2024-11-26 23:04:29.933440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:19:51.060 [2024-11-26 23:04:29.933448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:51.060 [2024-11-26 23:04:29.933456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:51.061 [2024-11-26 23:04:29.933498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:51.061 [2024-11-26 23:04:29.933511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:19:51.061 [2024-11-26 23:04:29.933524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:51.061 [2024-11-26 23:04:29.933531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:51.061 [2024-11-26 23:04:29.933664] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 60.583 ms, result 0
00:19:51.322
00:19:51.322
00:19:51.322 23:04:30 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=89573
00:19:51.322 23:04:30 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 89573
00:19:51.322 23:04:30 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 89573 ']'
00:19:51.322 23:04:30 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:19:51.323 23:04:30 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:51.323 23:04:30 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:51.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:51.323 23:04:30 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:51.323 23:04:30 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:51.323 23:04:30 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:19:51.323 [2024-11-26 23:04:30.385323] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization...
00:19:51.323 [2024-11-26 23:04:30.385505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89573 ]
00:19:51.591 [2024-11-26 23:04:30.525722] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:19:51.591 [2024-11-26 23:04:30.553356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.591 [2024-11-26 23:04:30.581438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.166 23:04:31 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:52.166 23:04:31 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:52.167 23:04:31 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:52.432 [2024-11-26 23:04:31.453101] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:52.432 [2024-11-26 23:04:31.453182] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:52.695 [2024-11-26 23:04:31.627274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.695 [2024-11-26 23:04:31.627347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:52.695 [2024-11-26 23:04:31.627369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:52.695 [2024-11-26 23:04:31.627378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.695 [2024-11-26 23:04:31.629958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.695 [2024-11-26 23:04:31.630008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:52.695 [2024-11-26 23:04:31.630024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.551 ms 00:19:52.695 [2024-11-26 23:04:31.630032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.695 [2024-11-26 23:04:31.630134] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:52.695 [2024-11-26 23:04:31.630441] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:52.695 [2024-11-26 23:04:31.630464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.695 [2024-11-26 23:04:31.630473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:52.695 [2024-11-26 23:04:31.630485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:19:52.695 [2024-11-26 23:04:31.630494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.695 [2024-11-26 23:04:31.632384] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:52.695 [2024-11-26 23:04:31.636386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.695 [2024-11-26 23:04:31.636437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:52.695 [2024-11-26 23:04:31.636449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.009 ms 00:19:52.695 [2024-11-26 23:04:31.636459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.695 [2024-11-26 23:04:31.636545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.695 [2024-11-26 23:04:31.636561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:52.695 [2024-11-26 23:04:31.636570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:52.695 [2024-11-26 23:04:31.636588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.695 [2024-11-26 23:04:31.644716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.695 [2024-11-26 
23:04:31.644765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:52.695 [2024-11-26 23:04:31.644776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.074 ms 00:19:52.695 [2024-11-26 23:04:31.644787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.695 [2024-11-26 23:04:31.644903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.695 [2024-11-26 23:04:31.644921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:52.695 [2024-11-26 23:04:31.644933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:19:52.695 [2024-11-26 23:04:31.644945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.695 [2024-11-26 23:04:31.644976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.695 [2024-11-26 23:04:31.644986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:52.695 [2024-11-26 23:04:31.644995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:52.695 [2024-11-26 23:04:31.645005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.695 [2024-11-26 23:04:31.645030] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:52.695 [2024-11-26 23:04:31.647136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.695 [2024-11-26 23:04:31.647178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:52.695 [2024-11-26 23:04:31.647190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.107 ms 00:19:52.695 [2024-11-26 23:04:31.647199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.695 [2024-11-26 23:04:31.647240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.695 [2024-11-26 23:04:31.647249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:52.695 [2024-11-26 23:04:31.647259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:52.695 [2024-11-26 23:04:31.647267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.695 [2024-11-26 23:04:31.647315] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:52.695 [2024-11-26 23:04:31.647342] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:52.695 [2024-11-26 23:04:31.647393] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:52.695 [2024-11-26 23:04:31.647410] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:52.695 [2024-11-26 23:04:31.647532] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:52.695 [2024-11-26 23:04:31.647545] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:52.695 [2024-11-26 23:04:31.647559] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:52.695 [2024-11-26 23:04:31.647570] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:52.695 [2024-11-26 23:04:31.647588] ftl_layout.c: 687:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:52.695 [2024-11-26 23:04:31.647597] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:52.695 [2024-11-26 23:04:31.647608] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:52.695 [2024-11-26 23:04:31.647618] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:52.695 [2024-11-26 23:04:31.647627] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:52.695 [2024-11-26 23:04:31.647635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.695 [2024-11-26 23:04:31.647647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:52.696 [2024-11-26 23:04:31.647657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:19:52.696 [2024-11-26 23:04:31.647665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.696 [2024-11-26 23:04:31.647754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.696 [2024-11-26 23:04:31.647766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:52.696 [2024-11-26 23:04:31.647775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:52.696 [2024-11-26 23:04:31.647785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.696 [2024-11-26 23:04:31.647891] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:52.696 [2024-11-26 23:04:31.647905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:52.696 [2024-11-26 23:04:31.647917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:52.696 [2024-11-26 23:04:31.647937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.696 [2024-11-26 23:04:31.647948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:52.696 [2024-11-26 23:04:31.647958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:52.696 [2024-11-26 23:04:31.647967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:52.696 [2024-11-26 23:04:31.647977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:52.696 [2024-11-26 23:04:31.647993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:52.696 [2024-11-26 23:04:31.648003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:52.696 [2024-11-26 23:04:31.648012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:52.696 [2024-11-26 23:04:31.648023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:52.696 [2024-11-26 23:04:31.648032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:52.696 [2024-11-26 23:04:31.648043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:52.696 [2024-11-26 23:04:31.648052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:52.696 [2024-11-26 23:04:31.648062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.696 [2024-11-26 23:04:31.648070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:52.696 [2024-11-26 23:04:31.648080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:52.696 [2024-11-26 23:04:31.648087] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:19:52.696 [2024-11-26 23:04:31.648101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:52.696 [2024-11-26 23:04:31.648110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:52.696 [2024-11-26 23:04:31.648119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.696 [2024-11-26 23:04:31.648127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:52.696 [2024-11-26 23:04:31.648139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:52.696 [2024-11-26 23:04:31.648149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.696 [2024-11-26 23:04:31.648160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:52.696 [2024-11-26 23:04:31.648168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:52.696 [2024-11-26 23:04:31.648179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.696 [2024-11-26 23:04:31.648187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:52.696 [2024-11-26 23:04:31.648196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:52.696 [2024-11-26 23:04:31.648202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.696 [2024-11-26 23:04:31.648212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:52.696 [2024-11-26 23:04:31.648220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:52.696 [2024-11-26 23:04:31.648229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:52.696 [2024-11-26 23:04:31.648236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:52.696 [2024-11-26 23:04:31.648246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:52.696 [2024-11-26 23:04:31.648255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:52.696 [2024-11-26 23:04:31.648263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:52.696 [2024-11-26 23:04:31.648270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:52.696 [2024-11-26 23:04:31.648278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.696 [2024-11-26 23:04:31.648285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:52.696 [2024-11-26 23:04:31.648308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:52.696 [2024-11-26 23:04:31.648316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.696 [2024-11-26 23:04:31.648325] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:52.696 [2024-11-26 23:04:31.648332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:52.696 [2024-11-26 23:04:31.648342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:52.696 [2024-11-26 23:04:31.648350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.696 [2024-11-26 23:04:31.648361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:52.696 [2024-11-26 23:04:31.648369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:52.696 [2024-11-26 23:04:31.648378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:52.696 [2024-11-26 23:04:31.648385] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:52.696 [2024-11-26 23:04:31.648411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:52.696 [2024-11-26 23:04:31.648419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:52.696 [2024-11-26 23:04:31.648432] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:52.696 [2024-11-26 23:04:31.648445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:52.696 [2024-11-26 23:04:31.648457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:52.696 [2024-11-26 23:04:31.648468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:52.696 [2024-11-26 23:04:31.648479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:52.696 [2024-11-26 23:04:31.648487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:52.696 [2024-11-26 23:04:31.648498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:52.696 [2024-11-26 23:04:31.648506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:52.696 [2024-11-26 23:04:31.648515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:52.696 [2024-11-26 23:04:31.648522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:52.696 [2024-11-26 23:04:31.648533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:52.696 [2024-11-26 23:04:31.648540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:52.696 [2024-11-26 23:04:31.648549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:52.696 [2024-11-26 23:04:31.648556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:52.696 [2024-11-26 23:04:31.648568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:52.696 [2024-11-26 23:04:31.648576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:52.696 [2024-11-26 23:04:31.648585] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:52.696 [2024-11-26 23:04:31.648593] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:52.696 [2024-11-26 23:04:31.648605] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:19:52.696 [2024-11-26 23:04:31.648613] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:52.696 [2024-11-26 23:04:31.648622] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:52.696 [2024-11-26 23:04:31.648630] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:52.696 [2024-11-26 23:04:31.648640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.696 [2024-11-26 23:04:31.648648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:52.696 [2024-11-26 23:04:31.648659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.815 ms 00:19:52.696 [2024-11-26 23:04:31.648667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.696 [2024-11-26 23:04:31.662625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.696 [2024-11-26 23:04:31.662850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:52.696 [2024-11-26 23:04:31.662880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.896 ms 00:19:52.696 [2024-11-26 23:04:31.662892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.696 [2024-11-26 23:04:31.663035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.696 [2024-11-26 23:04:31.663048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:52.696 [2024-11-26 23:04:31.663059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:52.696 [2024-11-26 23:04:31.663067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.696 [2024-11-26 23:04:31.675592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.696 [2024-11-26 23:04:31.675638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:52.696 [2024-11-26 23:04:31.675655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.501 ms 00:19:52.696 [2024-11-26 23:04:31.675667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.696 [2024-11-26 23:04:31.675741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.696 [2024-11-26 23:04:31.675752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:52.696 [2024-11-26 23:04:31.675768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:52.697 [2024-11-26 23:04:31.675777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.697 [2024-11-26 23:04:31.676279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.697 [2024-11-26 23:04:31.676335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:52.697 [2024-11-26 23:04:31.676350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.471 ms 00:19:52.697 [2024-11-26 23:04:31.676362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.697 [2024-11-26 23:04:31.676519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.697 [2024-11-26 23:04:31.676530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:52.697 [2024-11-26 23:04:31.676541] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:19:52.697 [2024-11-26 23:04:31.676552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.697 [2024-11-26 23:04:31.684901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.697 [2024-11-26 23:04:31.684947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:52.697 [2024-11-26 23:04:31.684961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.321 ms 00:19:52.697 [2024-11-26 23:04:31.684975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.697 [2024-11-26 23:04:31.697596] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:52.697 [2024-11-26 23:04:31.697842] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:52.697 [2024-11-26 23:04:31.697875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.697 [2024-11-26 23:04:31.697889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:52.697 [2024-11-26 23:04:31.697906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.784 ms 00:19:52.697 [2024-11-26 23:04:31.697917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.697 [2024-11-26 23:04:31.716311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.697 [2024-11-26 23:04:31.716515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:52.697 [2024-11-26 23:04:31.716544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.313 ms 00:19:52.697 [2024-11-26 23:04:31.716559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.697 [2024-11-26 23:04:31.719979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.697 [2024-11-26 23:04:31.720161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:52.697 [2024-11-26 23:04:31.720185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.172 ms 00:19:52.697 [2024-11-26 23:04:31.720193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.697 [2024-11-26 23:04:31.722877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.697 [2024-11-26 23:04:31.722923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:52.697 [2024-11-26 23:04:31.722937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.631 ms 00:19:52.697 [2024-11-26 23:04:31.722944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.697 [2024-11-26 23:04:31.723431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.697 [2024-11-26 23:04:31.723474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:52.697 [2024-11-26 23:04:31.723501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:19:52.697 [2024-11-26 23:04:31.723523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.697 [2024-11-26 23:04:31.750808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.697 [2024-11-26 23:04:31.751024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:52.697 [2024-11-26 23:04:31.751053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.056 ms 
00:19:52.697 [2024-11-26 23:04:31.751066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.697 [2024-11-26 23:04:31.759448] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:52.697 [2024-11-26 23:04:31.778421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.697 [2024-11-26 23:04:31.778476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:52.697 [2024-11-26 23:04:31.778489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.266 ms 00:19:52.697 [2024-11-26 23:04:31.778500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.697 [2024-11-26 23:04:31.778597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.697 [2024-11-26 23:04:31.778610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:52.697 [2024-11-26 23:04:31.778620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:52.697 [2024-11-26 23:04:31.778630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.697 [2024-11-26 23:04:31.778718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.697 [2024-11-26 23:04:31.778732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:52.697 [2024-11-26 23:04:31.778741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:19:52.697 [2024-11-26 23:04:31.778751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.697 [2024-11-26 23:04:31.778777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.697 [2024-11-26 23:04:31.778795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:52.697 [2024-11-26 23:04:31.778803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:52.697 [2024-11-26 23:04:31.778814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.697 [2024-11-26 23:04:31.778851] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:52.697 [2024-11-26 23:04:31.778864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.697 [2024-11-26 23:04:31.778873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:52.697 [2024-11-26 23:04:31.778883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:52.697 [2024-11-26 23:04:31.778891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.697 [2024-11-26 23:04:31.784907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.697 [2024-11-26 23:04:31.784958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:52.697 [2024-11-26 23:04:31.784975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.987 ms 00:19:52.697 [2024-11-26 23:04:31.784983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.697 [2024-11-26 23:04:31.785080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.697 [2024-11-26 23:04:31.785090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:52.697 [2024-11-26 23:04:31.785102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:52.697 [2024-11-26 23:04:31.785111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.697 [2024-11-26 
23:04:31.786232] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:52.697 [2024-11-26 23:04:31.787635] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 158.618 ms, result 0 00:19:52.697 [2024-11-26 23:04:31.789752] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:52.697 Some configs were skipped because the RPC state that can call them passed over. 00:19:52.959 23:04:31 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:52.959 [2024-11-26 23:04:32.043035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.959 [2024-11-26 23:04:32.043213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:52.959 [2024-11-26 23:04:32.043235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.272 ms 00:19:52.959 [2024-11-26 23:04:32.043245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.959 [2024-11-26 23:04:32.043286] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.526 ms, result 0 00:19:52.959 true 00:19:52.959 23:04:32 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:53.221 [2024-11-26 23:04:32.262733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.221 [2024-11-26 23:04:32.262784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:53.221 [2024-11-26 23:04:32.262797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.740 ms 00:19:53.221 [2024-11-26 23:04:32.262806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.221 [2024-11-26 23:04:32.262844] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.855 ms, result 0 00:19:53.221 true 00:19:53.221 23:04:32 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 89573 00:19:53.221 23:04:32 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 89573 ']' 00:19:53.221 23:04:32 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 89573 00:19:53.221 23:04:32 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:53.221 23:04:32 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.221 23:04:32 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89573 00:19:53.221 killing process with pid 89573 00:19:53.221 23:04:32 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:53.221 23:04:32 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:53.221 23:04:32 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89573' 00:19:53.221 23:04:32 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 89573 00:19:53.221 23:04:32 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 89573 00:19:53.485 [2024-11-26 23:04:32.445680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.485 [2024-11-26 23:04:32.445743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:53.485 [2024-11-26 23:04:32.445757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:53.485 [2024-11-26 
23:04:32.445768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.485 [2024-11-26 23:04:32.445792] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:53.485 [2024-11-26 23:04:32.446325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.485 [2024-11-26 23:04:32.446360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:53.485 [2024-11-26 23:04:32.446373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:19:53.485 [2024-11-26 23:04:32.446382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.485 [2024-11-26 23:04:32.446665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.485 [2024-11-26 23:04:32.446699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:53.485 [2024-11-26 23:04:32.446711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:19:53.485 [2024-11-26 23:04:32.446720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.485 [2024-11-26 23:04:32.451462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.485 [2024-11-26 23:04:32.451499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:53.485 [2024-11-26 23:04:32.451513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.703 ms 00:19:53.485 [2024-11-26 23:04:32.451521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.485 [2024-11-26 23:04:32.458557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.485 [2024-11-26 23:04:32.458592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:53.485 [2024-11-26 23:04:32.458610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.994 ms 00:19:53.485 [2024-11-26 23:04:32.458623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.485 [2024-11-26 23:04:32.461272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.485 [2024-11-26 23:04:32.461482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:53.485 [2024-11-26 23:04:32.461504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.557 ms 00:19:53.485 [2024-11-26 23:04:32.461511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.485 [2024-11-26 23:04:32.466748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.485 [2024-11-26 23:04:32.466789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:53.485 [2024-11-26 23:04:32.466804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.192 ms 00:19:53.485 [2024-11-26 23:04:32.466812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.485 [2024-11-26 23:04:32.466959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.485 [2024-11-26 23:04:32.466972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:53.485 [2024-11-26 23:04:32.466982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:19:53.485 [2024-11-26 23:04:32.466990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.485 [2024-11-26 23:04:32.470381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.485 [2024-11-26 23:04:32.470419] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:53.485 [2024-11-26 23:04:32.470433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.367 ms 00:19:53.485 [2024-11-26 23:04:32.470440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.485 [2024-11-26 23:04:32.473054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.485 [2024-11-26 23:04:32.473205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:53.485 [2024-11-26 23:04:32.473227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.564 ms 00:19:53.485 [2024-11-26 23:04:32.473234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.485 [2024-11-26 23:04:32.475605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.485 [2024-11-26 23:04:32.475643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:53.485 [2024-11-26 23:04:32.475655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.325 ms 00:19:53.485 [2024-11-26 23:04:32.475662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.485 [2024-11-26 23:04:32.477760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.485 [2024-11-26 23:04:32.477816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:53.485 [2024-11-26 23:04:32.477829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.022 ms 00:19:53.485 [2024-11-26 23:04:32.477836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.485 [2024-11-26 23:04:32.477878] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:53.485 [2024-11-26 23:04:32.477893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.477909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.477916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.477926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.477934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.477943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.477951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.477961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.477968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.477979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.477987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.477998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.478006] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.478016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.478023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.478032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.478040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.478054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.478061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.478071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.478079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.478090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.478097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.478106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.478114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.478123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.478131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.478140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.478150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.478160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.478168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.478179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.478186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.478199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.478206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:53.485 [2024-11-26 23:04:32.478216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478235] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 
23:04:32.478468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:19:53.486 [2024-11-26 23:04:32.478711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:53.486 [2024-11-26 23:04:32.478835] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:53.486 [2024-11-26 23:04:32.478849] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e0e83087-eb0e-4185-b4d1-3f0b13ab70d0 00:19:53.486 [2024-11-26 23:04:32.478857] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:53.486 [2024-11-26 23:04:32.478866] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:53.486 [2024-11-26 23:04:32.478874] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:53.486 [2024-11-26 23:04:32.478885] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:53.486 [2024-11-26 23:04:32.478895] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:53.486 [2024-11-26 23:04:32.478905] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:53.486 [2024-11-26 23:04:32.478914] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:53.486 [2024-11-26 23:04:32.478922] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:53.486 [2024-11-26 23:04:32.478928] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:53.486 [2024-11-26 23:04:32.478937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.486 [2024-11-26 23:04:32.478946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:53.486 [2024-11-26 23:04:32.478958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.061 ms 00:19:53.486 [2024-11-26 23:04:32.478965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:53.486 [2024-11-26 23:04:32.481186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.486 [2024-11-26 23:04:32.481245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:53.486 [2024-11-26 23:04:32.481274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.161 ms 00:19:53.486 [2024-11-26 23:04:32.481314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.486 [2024-11-26 23:04:32.481515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.486 [2024-11-26 23:04:32.481542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:53.486 [2024-11-26 23:04:32.481567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:19:53.486 [2024-11-26 23:04:32.481590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.486 [2024-11-26 23:04:32.488614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.486 [2024-11-26 23:04:32.488763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:53.486 [2024-11-26 23:04:32.488823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.486 [2024-11-26 23:04:32.488846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.486 [2024-11-26 23:04:32.488945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.486 [2024-11-26 23:04:32.488970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:53.486 [2024-11-26 23:04:32.488995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.486 [2024-11-26 23:04:32.489017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.486 [2024-11-26 23:04:32.489092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.487 [2024-11-26 23:04:32.489157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:53.487 [2024-11-26 23:04:32.489180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.487 [2024-11-26 23:04:32.489205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.487 [2024-11-26 23:04:32.489240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.487 [2024-11-26 23:04:32.489261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:53.487 [2024-11-26 23:04:32.489284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.487 [2024-11-26 23:04:32.489424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.487 [2024-11-26 23:04:32.502097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.487 [2024-11-26 23:04:32.502278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:53.487 [2024-11-26 23:04:32.502364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.487 [2024-11-26 23:04:32.502388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.487 [2024-11-26 23:04:32.511936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.487 [2024-11-26 23:04:32.512101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:53.487 [2024-11-26 23:04:32.512161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.487 
[2024-11-26 23:04:32.512187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.487 [2024-11-26 23:04:32.512254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.487 [2024-11-26 23:04:32.512279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:53.487 [2024-11-26 23:04:32.512326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.487 [2024-11-26 23:04:32.512349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.487 [2024-11-26 23:04:32.512396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.487 [2024-11-26 23:04:32.512477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:53.487 [2024-11-26 23:04:32.512505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.487 [2024-11-26 23:04:32.512524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.487 [2024-11-26 23:04:32.512638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.487 [2024-11-26 23:04:32.512665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:53.487 [2024-11-26 23:04:32.512687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.487 [2024-11-26 23:04:32.512707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.487 [2024-11-26 23:04:32.512810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.487 [2024-11-26 23:04:32.512839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:53.487 [2024-11-26 23:04:32.512865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.487 [2024-11-26 23:04:32.512885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.487 [2024-11-26 23:04:32.513091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.487 [2024-11-26 23:04:32.513132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:53.487 [2024-11-26 23:04:32.513155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.487 [2024-11-26 23:04:32.513178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.487 [2024-11-26 23:04:32.513241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.487 [2024-11-26 23:04:32.513319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:53.487 [2024-11-26 23:04:32.513347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.487 [2024-11-26 23:04:32.513368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.487 [2024-11-26 23:04:32.513533] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 67.818 ms, result 0 00:19:53.749 23:04:32 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:53.749 23:04:32 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:53.749 [2024-11-26 23:04:32.805793] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
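The shell lines above (trim.sh@78-85) summarize the exercise this trace records: two bdev_ftl_unmap RPCs against the live target, each logged as an 'FTL trim' management process; a killprocess that produces the 'FTL shutdown' rollback sequence; then an spdk_dd read-back of ftl0 into a data file. A minimal sketch of that sequence, assuming an SPDK app is serving ftl0 and its bdev configuration was saved to the ftl.json used below — the bdev name, LBAs, counts, and paths are copied from the trace, while the bare kill/wait pair is a simplification of the test's killprocess helper, which also verifies the pid before signalling:

# Trim 1024 blocks at each end of the 23592960-entry L2P range.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
# Stop the target (pid 89573 in this run) so FTL persists a clean shutdown state.
kill 89573 && wait 89573
# spdk_dd re-attaches ftl0 from the saved JSON config and reads the data back.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
    --of=/home/vagrant/spdk_repo/spdk/test/ftl/data \
    --count=65536 \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json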
00:19:53.749 [2024-11-26 23:04:32.805940] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89609 ] 00:19:54.009 [2024-11-26 23:04:32.941240] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:54.009 [2024-11-26 23:04:32.972827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.009 [2024-11-26 23:04:33.002456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.009 [2024-11-26 23:04:33.120702] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:54.009 [2024-11-26 23:04:33.120791] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:54.270 [2024-11-26 23:04:33.281796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.270 [2024-11-26 23:04:33.282033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:54.270 [2024-11-26 23:04:33.282066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:54.270 [2024-11-26 23:04:33.282075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.271 [2024-11-26 23:04:33.284755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.271 [2024-11-26 23:04:33.284805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:54.271 [2024-11-26 23:04:33.284816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.650 ms 00:19:54.271 [2024-11-26 23:04:33.284828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.271 [2024-11-26 23:04:33.284939] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:54.271 [2024-11-26 23:04:33.285208] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:54.271 [2024-11-26 23:04:33.285225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.271 [2024-11-26 23:04:33.285238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:54.271 [2024-11-26 23:04:33.285248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:19:54.271 [2024-11-26 23:04:33.285257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.271 [2024-11-26 23:04:33.287191] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:54.271 [2024-11-26 23:04:33.291107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.271 [2024-11-26 23:04:33.291289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:54.271 [2024-11-26 23:04:33.291491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.918 ms 00:19:54.271 [2024-11-26 23:04:33.291507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.271 [2024-11-26 23:04:33.291589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.271 [2024-11-26 23:04:33.291600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:54.271 [2024-11-26 23:04:33.291609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:54.271 [2024-11-26 
23:04:33.291617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.271 [2024-11-26 23:04:33.299642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.271 [2024-11-26 23:04:33.299692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:54.271 [2024-11-26 23:04:33.299704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.976 ms 00:19:54.271 [2024-11-26 23:04:33.299715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.271 [2024-11-26 23:04:33.299860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.271 [2024-11-26 23:04:33.299875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:54.271 [2024-11-26 23:04:33.299890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:19:54.271 [2024-11-26 23:04:33.299902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.271 [2024-11-26 23:04:33.299932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.271 [2024-11-26 23:04:33.299945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:54.271 [2024-11-26 23:04:33.299955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:54.271 [2024-11-26 23:04:33.299963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.271 [2024-11-26 23:04:33.299985] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:54.271 [2024-11-26 23:04:33.302074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.271 [2024-11-26 23:04:33.302118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:54.271 [2024-11-26 23:04:33.302132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.094 ms 00:19:54.271 [2024-11-26 23:04:33.302144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.271 [2024-11-26 23:04:33.302186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.271 [2024-11-26 23:04:33.302200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:54.271 [2024-11-26 23:04:33.302209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:54.271 [2024-11-26 23:04:33.302217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.271 [2024-11-26 23:04:33.302236] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:54.271 [2024-11-26 23:04:33.302257] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:54.271 [2024-11-26 23:04:33.302313] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:54.271 [2024-11-26 23:04:33.302335] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:54.271 [2024-11-26 23:04:33.302439] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:54.271 [2024-11-26 23:04:33.302451] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:54.271 [2024-11-26 23:04:33.302464] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:19:54.271 [2024-11-26 23:04:33.302475] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:54.271 [2024-11-26 23:04:33.302485] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:54.271 [2024-11-26 23:04:33.302502] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:54.271 [2024-11-26 23:04:33.302510] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:54.271 [2024-11-26 23:04:33.302520] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:54.271 [2024-11-26 23:04:33.302531] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:54.271 [2024-11-26 23:04:33.302542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.271 [2024-11-26 23:04:33.302549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:54.271 [2024-11-26 23:04:33.302559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:19:54.271 [2024-11-26 23:04:33.302568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.271 [2024-11-26 23:04:33.302656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.271 [2024-11-26 23:04:33.302671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:54.271 [2024-11-26 23:04:33.302710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:54.271 [2024-11-26 23:04:33.302719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.271 [2024-11-26 23:04:33.302822] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:54.271 [2024-11-26 23:04:33.302837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:54.271 [2024-11-26 23:04:33.302846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:54.271 [2024-11-26 23:04:33.302857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.271 [2024-11-26 23:04:33.302874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:54.271 [2024-11-26 23:04:33.302883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:54.271 [2024-11-26 23:04:33.302894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:54.271 [2024-11-26 23:04:33.302903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:54.271 [2024-11-26 23:04:33.302912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:54.271 [2024-11-26 23:04:33.302920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:54.271 [2024-11-26 23:04:33.302928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:54.271 [2024-11-26 23:04:33.302938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:54.271 [2024-11-26 23:04:33.302946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:54.271 [2024-11-26 23:04:33.302954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:54.271 [2024-11-26 23:04:33.302962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:54.271 [2024-11-26 23:04:33.302971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.271 [2024-11-26 23:04:33.302979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:19:54.271 [2024-11-26 23:04:33.302987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:54.271 [2024-11-26 23:04:33.302996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.272 [2024-11-26 23:04:33.303007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:54.272 [2024-11-26 23:04:33.303017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:54.272 [2024-11-26 23:04:33.303026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:54.272 [2024-11-26 23:04:33.303043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:54.272 [2024-11-26 23:04:33.303053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:54.272 [2024-11-26 23:04:33.303066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:54.272 [2024-11-26 23:04:33.303075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:54.272 [2024-11-26 23:04:33.303083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:54.272 [2024-11-26 23:04:33.303091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:54.272 [2024-11-26 23:04:33.303098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:54.272 [2024-11-26 23:04:33.303105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:54.272 [2024-11-26 23:04:33.303113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:54.272 [2024-11-26 23:04:33.303119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:54.272 [2024-11-26 23:04:33.303127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:54.272 [2024-11-26 23:04:33.303133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:54.272 [2024-11-26 23:04:33.303140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:54.272 [2024-11-26 23:04:33.303146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:54.272 [2024-11-26 23:04:33.303154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:54.272 [2024-11-26 23:04:33.303162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:54.272 [2024-11-26 23:04:33.303171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:54.272 [2024-11-26 23:04:33.303178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.272 [2024-11-26 23:04:33.303188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:54.272 [2024-11-26 23:04:33.303196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:54.272 [2024-11-26 23:04:33.303203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.272 [2024-11-26 23:04:33.303210] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:54.272 [2024-11-26 23:04:33.303218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:54.272 [2024-11-26 23:04:33.303227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:54.272 [2024-11-26 23:04:33.303238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.272 [2024-11-26 23:04:33.303251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:54.272 [2024-11-26 23:04:33.303258] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:54.272 [2024-11-26 23:04:33.303266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:54.272 [2024-11-26 23:04:33.303273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:54.272 [2024-11-26 23:04:33.303282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:54.272 [2024-11-26 23:04:33.303289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:54.272 [2024-11-26 23:04:33.303314] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:54.272 [2024-11-26 23:04:33.303327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:54.272 [2024-11-26 23:04:33.303339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:54.272 [2024-11-26 23:04:33.303349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:54.272 [2024-11-26 23:04:33.303357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:54.272 [2024-11-26 23:04:33.303365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:54.272 [2024-11-26 23:04:33.303373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:54.272 [2024-11-26 23:04:33.303381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:54.272 [2024-11-26 23:04:33.303391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:54.272 [2024-11-26 23:04:33.303399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:54.272 [2024-11-26 23:04:33.303408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:54.272 [2024-11-26 23:04:33.303415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:54.272 [2024-11-26 23:04:33.303425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:54.272 [2024-11-26 23:04:33.303434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:54.272 [2024-11-26 23:04:33.303443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:54.272 [2024-11-26 23:04:33.303451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:54.272 [2024-11-26 23:04:33.303460] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:54.272 [2024-11-26 23:04:33.303476] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:54.272 [2024-11-26 23:04:33.303485] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:54.272 [2024-11-26 23:04:33.303492] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:54.272 [2024-11-26 23:04:33.303501] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:54.272 [2024-11-26 23:04:33.303509] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:54.272 [2024-11-26 23:04:33.303516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.272 [2024-11-26 23:04:33.303524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:54.272 [2024-11-26 23:04:33.303532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.762 ms 00:19:54.272 [2024-11-26 23:04:33.303540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.272 [2024-11-26 23:04:33.317495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.272 [2024-11-26 23:04:33.317541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:54.272 [2024-11-26 23:04:33.317555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.902 ms 00:19:54.272 [2024-11-26 23:04:33.317564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.273 [2024-11-26 23:04:33.317704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.273 [2024-11-26 23:04:33.317716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:54.273 [2024-11-26 23:04:33.317725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:54.273 [2024-11-26 23:04:33.317733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.273 [2024-11-26 23:04:33.339134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.273 [2024-11-26 23:04:33.339190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:54.273 [2024-11-26 23:04:33.339205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.376 ms 00:19:54.273 [2024-11-26 23:04:33.339224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.273 [2024-11-26 23:04:33.339355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.273 [2024-11-26 23:04:33.339378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:54.273 [2024-11-26 23:04:33.339390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:54.273 [2024-11-26 23:04:33.339401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.273 [2024-11-26 23:04:33.339960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.273 [2024-11-26 23:04:33.340006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:54.273 [2024-11-26 23:04:33.340019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:19:54.273 [2024-11-26 23:04:33.340029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.273 [2024-11-26 23:04:33.340207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:54.273 [2024-11-26 23:04:33.340220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:54.273 [2024-11-26 23:04:33.340230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:19:54.273 [2024-11-26 23:04:33.340239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.273 [2024-11-26 23:04:33.349250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.273 [2024-11-26 23:04:33.349326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:54.273 [2024-11-26 23:04:33.349346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.985 ms 00:19:54.273 [2024-11-26 23:04:33.349355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.273 [2024-11-26 23:04:33.353344] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:54.273 [2024-11-26 23:04:33.353393] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:54.273 [2024-11-26 23:04:33.353411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.273 [2024-11-26 23:04:33.353421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:54.273 [2024-11-26 23:04:33.353431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.923 ms 00:19:54.273 [2024-11-26 23:04:33.353439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.273 [2024-11-26 23:04:33.369887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.273 [2024-11-26 23:04:33.369936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:54.273 [2024-11-26 23:04:33.369949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.369 ms 00:19:54.273 [2024-11-26 23:04:33.369958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.273 [2024-11-26 23:04:33.373111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.273 [2024-11-26 23:04:33.373158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:54.273 [2024-11-26 23:04:33.373169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.059 ms 00:19:54.273 [2024-11-26 23:04:33.373177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.273 [2024-11-26 23:04:33.375995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.273 [2024-11-26 23:04:33.376042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:54.273 [2024-11-26 23:04:33.376053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.762 ms 00:19:54.273 [2024-11-26 23:04:33.376061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.273 [2024-11-26 23:04:33.376439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.273 [2024-11-26 23:04:33.376455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:54.273 [2024-11-26 23:04:33.376465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:19:54.273 [2024-11-26 23:04:33.376473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.535 [2024-11-26 23:04:33.403486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.535 [2024-11-26 23:04:33.403739] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:54.535 [2024-11-26 23:04:33.403759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.982 ms 00:19:54.535 [2024-11-26 23:04:33.403776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.535 [2024-11-26 23:04:33.411910] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:54.535 [2024-11-26 23:04:33.431123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.535 [2024-11-26 23:04:33.431380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:54.535 [2024-11-26 23:04:33.431400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.259 ms 00:19:54.535 [2024-11-26 23:04:33.431419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.535 [2024-11-26 23:04:33.431516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.535 [2024-11-26 23:04:33.431528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:54.535 [2024-11-26 23:04:33.431538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:54.535 [2024-11-26 23:04:33.431546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.535 [2024-11-26 23:04:33.431602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.535 [2024-11-26 23:04:33.431612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:54.535 [2024-11-26 23:04:33.431622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:54.535 [2024-11-26 23:04:33.431630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.535 [2024-11-26 23:04:33.431664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.535 [2024-11-26 23:04:33.431675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:54.535 [2024-11-26 23:04:33.431687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:54.535 [2024-11-26 23:04:33.431695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.535 [2024-11-26 23:04:33.431734] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:54.535 [2024-11-26 23:04:33.431746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.535 [2024-11-26 23:04:33.431754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:54.535 [2024-11-26 23:04:33.431765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:54.535 [2024-11-26 23:04:33.431775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.535 [2024-11-26 23:04:33.437838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.535 [2024-11-26 23:04:33.437889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:54.535 [2024-11-26 23:04:33.437908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.042 ms 00:19:54.535 [2024-11-26 23:04:33.437921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.535 [2024-11-26 23:04:33.438015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.535 [2024-11-26 23:04:33.438027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:54.535 [2024-11-26 23:04:33.438036] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:54.535 [2024-11-26 23:04:33.438044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.535 [2024-11-26 23:04:33.439115] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:54.535 [2024-11-26 23:04:33.440529] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 157.008 ms, result 0 00:19:54.535 [2024-11-26 23:04:33.441872] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:54.535 [2024-11-26 23:04:33.449211] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:55.478  [2024-11-26T23:04:35.549Z] Copying: 14/256 [MB] (14 MBps) [2024-11-26T23:04:36.490Z] Copying: 27/256 [MB] (13 MBps) [2024-11-26T23:04:37.884Z] Copying: 44/256 [MB] (17 MBps) [2024-11-26T23:04:38.455Z] Copying: 56/256 [MB] (11 MBps) [2024-11-26T23:04:39.839Z] Copying: 74/256 [MB] (18 MBps) [2024-11-26T23:04:40.784Z] Copying: 87/256 [MB] (13 MBps) [2024-11-26T23:04:41.724Z] Copying: 98/256 [MB] (10 MBps) [2024-11-26T23:04:42.665Z] Copying: 110/256 [MB] (12 MBps) [2024-11-26T23:04:43.608Z] Copying: 122/256 [MB] (12 MBps) [2024-11-26T23:04:44.554Z] Copying: 134/256 [MB] (12 MBps) [2024-11-26T23:04:45.498Z] Copying: 148080/262144 [kB] (10036 kBps) [2024-11-26T23:04:46.888Z] Copying: 158172/262144 [kB] (10092 kBps) [2024-11-26T23:04:47.473Z] Copying: 175/256 [MB] (20 MBps) [2024-11-26T23:04:48.553Z] Copying: 190/256 [MB] (15 MBps) [2024-11-26T23:04:49.496Z] Copying: 207/256 [MB] (16 MBps) [2024-11-26T23:04:50.888Z] Copying: 220/256 [MB] (13 MBps) [2024-11-26T23:04:51.459Z] Copying: 230/256 [MB] (10 MBps) [2024-11-26T23:04:52.405Z] Copying: 245/256 [MB] (14 MBps) [2024-11-26T23:04:52.405Z] Copying: 256/256 [MB] (average 13 MBps)[2024-11-26 23:04:52.060923] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:13.278 [2024-11-26 23:04:52.063449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.278 [2024-11-26 23:04:52.063504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:13.278 [2024-11-26 23:04:52.063521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:13.278 [2024-11-26 23:04:52.063530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.278 [2024-11-26 23:04:52.063555] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:13.278 [2024-11-26 23:04:52.064526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.278 [2024-11-26 23:04:52.064569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:13.278 [2024-11-26 23:04:52.064581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms 00:20:13.278 [2024-11-26 23:04:52.064589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.278 [2024-11-26 23:04:52.064875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.278 [2024-11-26 23:04:52.064886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:13.278 [2024-11-26 23:04:52.064896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:20:13.278 [2024-11-26 23:04:52.064904] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.278 [2024-11-26 23:04:52.068627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.278 [2024-11-26 23:04:52.068651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:13.278 [2024-11-26 23:04:52.068662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.707 ms 00:20:13.278 [2024-11-26 23:04:52.068670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.278 [2024-11-26 23:04:52.075746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.278 [2024-11-26 23:04:52.075806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:13.278 [2024-11-26 23:04:52.075819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.040 ms 00:20:13.278 [2024-11-26 23:04:52.075827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.278 [2024-11-26 23:04:52.079086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.278 [2024-11-26 23:04:52.079307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:13.278 [2024-11-26 23:04:52.079329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.178 ms 00:20:13.278 [2024-11-26 23:04:52.079337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.278 [2024-11-26 23:04:52.084734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.278 [2024-11-26 23:04:52.084788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:13.278 [2024-11-26 23:04:52.084801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.311 ms 00:20:13.278 [2024-11-26 23:04:52.084810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.278 [2024-11-26 23:04:52.084951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.278 [2024-11-26 23:04:52.084971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:13.278 [2024-11-26 23:04:52.084980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:20:13.278 [2024-11-26 23:04:52.084988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.278 [2024-11-26 23:04:52.087979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.278 [2024-11-26 23:04:52.088030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:13.278 [2024-11-26 23:04:52.088040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.971 ms 00:20:13.278 [2024-11-26 23:04:52.088047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.278 [2024-11-26 23:04:52.091103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.278 [2024-11-26 23:04:52.091155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:13.278 [2024-11-26 23:04:52.091164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.008 ms 00:20:13.278 [2024-11-26 23:04:52.091171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.278 [2024-11-26 23:04:52.093664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.278 [2024-11-26 23:04:52.093846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:13.278 [2024-11-26 23:04:52.093864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.446 ms 
00:20:13.278 [2024-11-26 23:04:52.093873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.278 [2024-11-26 23:04:52.096249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.278 [2024-11-26 23:04:52.096320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:13.278 [2024-11-26 23:04:52.096330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.267 ms 00:20:13.278 [2024-11-26 23:04:52.096338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.278 [2024-11-26 23:04:52.096385] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:13.278 [2024-11-26 23:04:52.096404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 
23:04:52.096565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 
00:20:13.278 [2024-11-26 23:04:52.096773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 
wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.096995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.097003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.097010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.097018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.097026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.097033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.097040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.097048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.097055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.097063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.097071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.097078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.097085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.097092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.097099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.097107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.097115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.097121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.097128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.097136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.097143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.097153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.097161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.097170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.097178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.097214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:13.278 [2024-11-26 23:04:52.097232] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:13.278 [2024-11-26 23:04:52.097242] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e0e83087-eb0e-4185-b4d1-3f0b13ab70d0 00:20:13.278 [2024-11-26 23:04:52.097251] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:13.278 [2024-11-26 23:04:52.097265] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:13.278 [2024-11-26 23:04:52.097274] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:13.278 [2024-11-26 23:04:52.097283] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:13.278 [2024-11-26 23:04:52.097310] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:13.278 [2024-11-26 23:04:52.097322] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:13.278 [2024-11-26 23:04:52.097331] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:13.278 [2024-11-26 23:04:52.097338] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:13.278 [2024-11-26 23:04:52.097345] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:13.278 [2024-11-26 23:04:52.097353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.278 [2024-11-26 23:04:52.097362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:13.278 [2024-11-26 23:04:52.097371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.970 ms 00:20:13.278 [2024-11-26 23:04:52.097379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.278 [2024-11-26 23:04:52.100580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.278 [2024-11-26 23:04:52.100747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:13.278 [2024-11-26 23:04:52.100774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.173 ms 00:20:13.278 [2024-11-26 23:04:52.100784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.278 [2024-11-26 23:04:52.100957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.279 [2024-11-26 23:04:52.100967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:13.279 [2024-11-26 23:04:52.100976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:20:13.279 [2024-11-26 23:04:52.100984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.279 [2024-11-26 23:04:52.111809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.279 [2024-11-26 23:04:52.112013] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:13.279 [2024-11-26 23:04:52.112032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.279 [2024-11-26 23:04:52.112041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.279 [2024-11-26 23:04:52.112135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.279 [2024-11-26 23:04:52.112145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:13.279 [2024-11-26 23:04:52.112153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.279 [2024-11-26 23:04:52.112162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.279 [2024-11-26 23:04:52.112217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.279 [2024-11-26 23:04:52.112227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:13.279 [2024-11-26 23:04:52.112239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.279 [2024-11-26 23:04:52.112246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.279 [2024-11-26 23:04:52.112266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.279 [2024-11-26 23:04:52.112274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:13.279 [2024-11-26 23:04:52.112282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.279 [2024-11-26 23:04:52.112289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.279 [2024-11-26 23:04:52.132287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.279 [2024-11-26 23:04:52.132375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:13.279 [2024-11-26 23:04:52.132393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.279 [2024-11-26 23:04:52.132406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.279 [2024-11-26 23:04:52.147457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.279 [2024-11-26 23:04:52.147517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:13.279 [2024-11-26 23:04:52.147530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.279 [2024-11-26 23:04:52.147540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.279 [2024-11-26 23:04:52.147614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.279 [2024-11-26 23:04:52.147625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:13.279 [2024-11-26 23:04:52.147636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.279 [2024-11-26 23:04:52.147649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.279 [2024-11-26 23:04:52.147684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.279 [2024-11-26 23:04:52.147694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:13.279 [2024-11-26 23:04:52.147703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.279 [2024-11-26 23:04:52.147712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.279 [2024-11-26 23:04:52.147797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:20:13.279 [2024-11-26 23:04:52.147812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:13.279 [2024-11-26 23:04:52.147821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.279 [2024-11-26 23:04:52.147830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.279 [2024-11-26 23:04:52.147873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.279 [2024-11-26 23:04:52.147883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:13.279 [2024-11-26 23:04:52.147892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.279 [2024-11-26 23:04:52.147904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.279 [2024-11-26 23:04:52.147958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.279 [2024-11-26 23:04:52.147970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:13.279 [2024-11-26 23:04:52.147978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.279 [2024-11-26 23:04:52.147987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.279 [2024-11-26 23:04:52.148045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.279 [2024-11-26 23:04:52.148057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:13.279 [2024-11-26 23:04:52.148066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.279 [2024-11-26 23:04:52.148075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.279 [2024-11-26 23:04:52.148259] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 84.784 ms, result 0 00:20:13.541 00:20:13.541 00:20:13.541 23:04:52 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:13.541 23:04:52 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:14.110 23:04:53 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:14.110 [2024-11-26 23:04:53.083247] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:20:14.110 [2024-11-26 23:04:53.083383] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89824 ] 00:20:14.110 [2024-11-26 23:04:53.216688] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
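The startup and shutdown traces above follow a fixed shape: for every management step, mngt/ftl_mngt.c logs an Action line (427), the step name (428), its duration (430), and its status (431), and upgrade/ftl_sb_v5.c dumps each superblock region as "Region type:... ver:... blk_offs:... blk_sz:..." with offsets and sizes counted in FTL blocks. What follows is a minimal reader-side Python sketch for pulling those out of a saved copy of this console text; it is not part of the SPDK tree, and the console.log default path and script name are assumptions.

#!/usr/bin/env python3
# Sketch: summarize FTL trace_step durations and superblock regions from a
# saved autotest console log. Tailored to the concatenated console text
# above, where each step name runs up to the next HH:MM:SS job timestamp.
import re
import sys

text = open(sys.argv[1] if len(sys.argv) > 1 else "console.log").read()

# 428:trace_step prints the step name; 430:trace_step prints its duration.
names = re.findall(
    r"428:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+?) \d\d:\d\d:\d\d",
    text)
durations = re.findall(
    r"430:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms",
    text)

print("step durations:")
for name, ms in zip(names, durations):
    print(f"  {float(ms):9.3f} ms  {name}")

# Region sizes are in FTL blocks of 4 KiB: blk_sz 0x5a00 = 23040 blocks
# = 90.00 MiB, which matches the l2p line in the layout dump above.
BLOCK = 4096
print("superblock regions:")
for typ, ver, offs, size in re.findall(
        r"Region type:(0x[0-9a-f]+) ver:(\d+) blk_offs:(0x[0-9a-f]+)"
        r" blk_sz:(0x[0-9a-f]+)", text):
    mib = int(size, 16) * BLOCK / (1 << 20)
    print(f"  type {typ:>10}  ver {ver}  offs {offs:>9}  size {mib:9.2f} MiB")

Run as "python3 summarize_ftl_log.py console.log". The per-step table makes it easy to see where the totals reported by finish_msg go (157.008 ms for 'FTL startup' and 84.784 ms for 'FTL shutdown' in this run); here the largest single startup steps are "Initialize L2P" (27.259 ms), "Restore P2L checkpoints" (26.982 ms), and "Initialize NV cache" (21.376 ms).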
00:20:14.376 [2024-11-26 23:04:53.251222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.376 [2024-11-26 23:04:53.292492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.376 [2024-11-26 23:04:53.446814] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:14.376 [2024-11-26 23:04:53.446913] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:14.638 [2024-11-26 23:04:53.610630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.638 [2024-11-26 23:04:53.610715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:14.638 [2024-11-26 23:04:53.610733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:14.638 [2024-11-26 23:04:53.610743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.638 [2024-11-26 23:04:53.613510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.638 [2024-11-26 23:04:53.613565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:14.638 [2024-11-26 23:04:53.613581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.743 ms 00:20:14.638 [2024-11-26 23:04:53.613589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.638 [2024-11-26 23:04:53.613706] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:14.638 [2024-11-26 23:04:53.613998] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:14.638 [2024-11-26 23:04:53.614015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.638 [2024-11-26 23:04:53.614024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:14.638 [2024-11-26 23:04:53.614036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:20:14.638 [2024-11-26 23:04:53.614045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.638 [2024-11-26 23:04:53.617597] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:14.638 [2024-11-26 23:04:53.623992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.638 [2024-11-26 23:04:53.624442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:14.638 [2024-11-26 23:04:53.624496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.400 ms 00:20:14.638 [2024-11-26 23:04:53.624523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.638 [2024-11-26 23:04:53.624779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.638 [2024-11-26 23:04:53.624817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:14.638 [2024-11-26 23:04:53.624843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:14.638 [2024-11-26 23:04:53.624866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.638 [2024-11-26 23:04:53.636996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.638 [2024-11-26 23:04:53.637042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:14.638 [2024-11-26 23:04:53.637058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.999 ms 00:20:14.638 [2024-11-26 23:04:53.637072] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.638 [2024-11-26 23:04:53.637213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.638 [2024-11-26 23:04:53.637226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:14.638 [2024-11-26 23:04:53.637238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:20:14.638 [2024-11-26 23:04:53.637246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.638 [2024-11-26 23:04:53.637279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.638 [2024-11-26 23:04:53.637288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:14.638 [2024-11-26 23:04:53.637343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:14.638 [2024-11-26 23:04:53.637355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.638 [2024-11-26 23:04:53.637386] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:14.638 [2024-11-26 23:04:53.640072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.638 [2024-11-26 23:04:53.640243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:14.638 [2024-11-26 23:04:53.640260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.696 ms 00:20:14.638 [2024-11-26 23:04:53.640275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.638 [2024-11-26 23:04:53.640342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.638 [2024-11-26 23:04:53.640353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:14.638 [2024-11-26 23:04:53.640362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:14.638 [2024-11-26 23:04:53.640370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.638 [2024-11-26 23:04:53.640392] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:14.638 [2024-11-26 23:04:53.640416] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:14.638 [2024-11-26 23:04:53.640459] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:14.638 [2024-11-26 23:04:53.640477] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:14.638 [2024-11-26 23:04:53.640596] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:14.638 [2024-11-26 23:04:53.640611] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:14.638 [2024-11-26 23:04:53.640623] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:14.638 [2024-11-26 23:04:53.640634] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:14.638 [2024-11-26 23:04:53.640644] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:14.638 [2024-11-26 23:04:53.640652] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:14.638 [2024-11-26 23:04:53.640660] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
L2P address size: 4 00:20:14.638 [2024-11-26 23:04:53.640674] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:14.638 [2024-11-26 23:04:53.640682] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:14.638 [2024-11-26 23:04:53.640690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.638 [2024-11-26 23:04:53.640699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:14.638 [2024-11-26 23:04:53.640707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:20:14.638 [2024-11-26 23:04:53.640714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.638 [2024-11-26 23:04:53.640802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.638 [2024-11-26 23:04:53.640816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:14.638 [2024-11-26 23:04:53.640829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:14.638 [2024-11-26 23:04:53.640841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.638 [2024-11-26 23:04:53.640948] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:14.638 [2024-11-26 23:04:53.640960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:14.638 [2024-11-26 23:04:53.640970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:14.638 [2024-11-26 23:04:53.640980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.638 [2024-11-26 23:04:53.641002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:14.638 [2024-11-26 23:04:53.641013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:14.638 [2024-11-26 23:04:53.641021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:14.638 [2024-11-26 23:04:53.641029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:14.638 [2024-11-26 23:04:53.641039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:14.638 [2024-11-26 23:04:53.641047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:14.638 [2024-11-26 23:04:53.641057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:14.638 [2024-11-26 23:04:53.641065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:14.638 [2024-11-26 23:04:53.641073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:14.638 [2024-11-26 23:04:53.641081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:14.638 [2024-11-26 23:04:53.641090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:14.638 [2024-11-26 23:04:53.641101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.638 [2024-11-26 23:04:53.641110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:14.638 [2024-11-26 23:04:53.641118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:14.638 [2024-11-26 23:04:53.641126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.638 [2024-11-26 23:04:53.641134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:14.638 [2024-11-26 23:04:53.641143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:14.638 [2024-11-26 23:04:53.641156] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.638 [2024-11-26 23:04:53.641164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:14.639 [2024-11-26 23:04:53.641171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:14.639 [2024-11-26 23:04:53.641177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.639 [2024-11-26 23:04:53.641185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:14.639 [2024-11-26 23:04:53.641192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:14.639 [2024-11-26 23:04:53.641198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.639 [2024-11-26 23:04:53.641205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:14.639 [2024-11-26 23:04:53.641211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:14.639 [2024-11-26 23:04:53.641218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.639 [2024-11-26 23:04:53.641224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:14.639 [2024-11-26 23:04:53.641231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:14.639 [2024-11-26 23:04:53.641238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:14.639 [2024-11-26 23:04:53.641245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:14.639 [2024-11-26 23:04:53.641251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:14.639 [2024-11-26 23:04:53.641258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:14.639 [2024-11-26 23:04:53.641268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:14.639 [2024-11-26 23:04:53.641275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:14.639 [2024-11-26 23:04:53.641281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.639 [2024-11-26 23:04:53.641288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:14.639 [2024-11-26 23:04:53.641309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:14.639 [2024-11-26 23:04:53.641317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.639 [2024-11-26 23:04:53.641324] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:14.639 [2024-11-26 23:04:53.641334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:14.639 [2024-11-26 23:04:53.641342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:14.639 [2024-11-26 23:04:53.641349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.639 [2024-11-26 23:04:53.641363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:14.639 [2024-11-26 23:04:53.641371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:14.639 [2024-11-26 23:04:53.641378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:14.639 [2024-11-26 23:04:53.641386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:14.639 [2024-11-26 23:04:53.641393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:14.639 [2024-11-26 23:04:53.641401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:20:14.639 [2024-11-26 23:04:53.641414] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:14.639 [2024-11-26 23:04:53.641426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:14.639 [2024-11-26 23:04:53.641438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:14.639 [2024-11-26 23:04:53.641446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:14.639 [2024-11-26 23:04:53.641454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:14.639 [2024-11-26 23:04:53.641471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:14.639 [2024-11-26 23:04:53.641479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:14.639 [2024-11-26 23:04:53.641487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:14.639 [2024-11-26 23:04:53.641494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:14.639 [2024-11-26 23:04:53.641502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:14.639 [2024-11-26 23:04:53.641509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:14.639 [2024-11-26 23:04:53.641517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:14.639 [2024-11-26 23:04:53.641524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:14.639 [2024-11-26 23:04:53.641531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:14.639 [2024-11-26 23:04:53.641538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:14.639 [2024-11-26 23:04:53.641545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:14.639 [2024-11-26 23:04:53.641555] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:14.639 [2024-11-26 23:04:53.641564] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:14.639 [2024-11-26 23:04:53.641573] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:14.639 [2024-11-26 23:04:53.641581] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:14.639 [2024-11-26 23:04:53.641590] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:14.639 [2024-11-26 23:04:53.641597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:14.639 [2024-11-26 23:04:53.641605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.639 [2024-11-26 23:04:53.641616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:14.639 [2024-11-26 23:04:53.641625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.726 ms 00:20:14.639 [2024-11-26 23:04:53.641633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.639 [2024-11-26 23:04:53.661772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.639 [2024-11-26 23:04:53.661825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:14.639 [2024-11-26 23:04:53.661838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.061 ms 00:20:14.639 [2024-11-26 23:04:53.661854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.639 [2024-11-26 23:04:53.661992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.639 [2024-11-26 23:04:53.662004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:14.639 [2024-11-26 23:04:53.662014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:14.639 [2024-11-26 23:04:53.662023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.639 [2024-11-26 23:04:53.684969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.639 [2024-11-26 23:04:53.685032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:14.639 [2024-11-26 23:04:53.685058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.920 ms 00:20:14.639 [2024-11-26 23:04:53.685074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.639 [2024-11-26 23:04:53.685172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.639 [2024-11-26 23:04:53.685185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:14.639 [2024-11-26 23:04:53.685196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:14.639 [2024-11-26 23:04:53.685207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.639 [2024-11-26 23:04:53.685920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.639 [2024-11-26 23:04:53.685973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:14.639 [2024-11-26 23:04:53.685985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.686 ms 00:20:14.639 [2024-11-26 23:04:53.685999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.639 [2024-11-26 23:04:53.686169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.639 [2024-11-26 23:04:53.686180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:14.639 [2024-11-26 23:04:53.686189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:20:14.639 [2024-11-26 23:04:53.686198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.639 [2024-11-26 23:04:53.697985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.639 [2024-11-26 
23:04:53.698043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:14.639 [2024-11-26 23:04:53.698056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.761 ms 00:20:14.639 [2024-11-26 23:04:53.698066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.639 [2024-11-26 23:04:53.702944] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:14.639 [2024-11-26 23:04:53.702999] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:14.639 [2024-11-26 23:04:53.703013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.639 [2024-11-26 23:04:53.703021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:14.639 [2024-11-26 23:04:53.703031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.817 ms 00:20:14.639 [2024-11-26 23:04:53.703039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.639 [2024-11-26 23:04:53.719569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.639 [2024-11-26 23:04:53.719629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:14.639 [2024-11-26 23:04:53.719642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.409 ms 00:20:14.639 [2024-11-26 23:04:53.719658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.639 [2024-11-26 23:04:53.722811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.639 [2024-11-26 23:04:53.722992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:14.639 [2024-11-26 23:04:53.723012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.056 ms 00:20:14.639 [2024-11-26 23:04:53.723020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.639 [2024-11-26 23:04:53.725866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.639 [2024-11-26 23:04:53.725912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:14.639 [2024-11-26 23:04:53.725922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.800 ms 00:20:14.640 [2024-11-26 23:04:53.725930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.640 [2024-11-26 23:04:53.726425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.640 [2024-11-26 23:04:53.726479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:14.640 [2024-11-26 23:04:53.726503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:20:14.640 [2024-11-26 23:04:53.726525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.640 [2024-11-26 23:04:53.756123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.640 [2024-11-26 23:04:53.756385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:14.640 [2024-11-26 23:04:53.756660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.508 ms 00:20:14.640 [2024-11-26 23:04:53.756685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.900 [2024-11-26 23:04:53.765794] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:14.900 [2024-11-26 23:04:53.791202] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.900 [2024-11-26 23:04:53.791404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:14.900 [2024-11-26 23:04:53.791465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.423 ms 00:20:14.900 [2024-11-26 23:04:53.791490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.900 [2024-11-26 23:04:53.791631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.900 [2024-11-26 23:04:53.791669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:14.900 [2024-11-26 23:04:53.791691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:14.900 [2024-11-26 23:04:53.791712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.900 [2024-11-26 23:04:53.791799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.900 [2024-11-26 23:04:53.791824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:14.900 [2024-11-26 23:04:53.791848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:20:14.900 [2024-11-26 23:04:53.791942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.900 [2024-11-26 23:04:53.792001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.900 [2024-11-26 23:04:53.792408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:14.900 [2024-11-26 23:04:53.792475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:14.900 [2024-11-26 23:04:53.792610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.900 [2024-11-26 23:04:53.792703] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:14.900 [2024-11-26 23:04:53.792732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.900 [2024-11-26 23:04:53.792754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:14.900 [2024-11-26 23:04:53.792777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:14.900 [2024-11-26 23:04:53.792797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.901 [2024-11-26 23:04:53.799902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.901 [2024-11-26 23:04:53.800074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:14.901 [2024-11-26 23:04:53.800141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.067 ms 00:20:14.901 [2024-11-26 23:04:53.800164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.901 [2024-11-26 23:04:53.800394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.901 [2024-11-26 23:04:53.800454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:14.901 [2024-11-26 23:04:53.800476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:14.901 [2024-11-26 23:04:53.800496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.901 [2024-11-26 23:04:53.801770] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:14.901 [2024-11-26 23:04:53.803333] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 190.755 
ms, result 0 00:20:14.901 [2024-11-26 23:04:53.804791] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:14.901 [2024-11-26 23:04:53.812878] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:15.165  [2024-11-26T23:04:54.292Z] Copying: 4096/4096 [kB] (average 11 MBps)[2024-11-26 23:04:54.161193] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:15.165 [2024-11-26 23:04:54.162102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.165 [2024-11-26 23:04:54.162145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:15.165 [2024-11-26 23:04:54.162158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:15.165 [2024-11-26 23:04:54.162166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.165 [2024-11-26 23:04:54.162187] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:15.165 [2024-11-26 23:04:54.162985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.165 [2024-11-26 23:04:54.163024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:15.165 [2024-11-26 23:04:54.163036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.785 ms 00:20:15.165 [2024-11-26 23:04:54.163045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.165 [2024-11-26 23:04:54.166055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.165 [2024-11-26 23:04:54.166103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:15.165 [2024-11-26 23:04:54.166114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.980 ms 00:20:15.165 [2024-11-26 23:04:54.166122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.165 [2024-11-26 23:04:54.170605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.165 [2024-11-26 23:04:54.170785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:15.165 [2024-11-26 23:04:54.170802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.467 ms 00:20:15.165 [2024-11-26 23:04:54.170810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.166 [2024-11-26 23:04:54.177810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.166 [2024-11-26 23:04:54.177971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:15.166 [2024-11-26 23:04:54.177989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.858 ms 00:20:15.166 [2024-11-26 23:04:54.177997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.166 [2024-11-26 23:04:54.180769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.166 [2024-11-26 23:04:54.180813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:15.166 [2024-11-26 23:04:54.180823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.705 ms 00:20:15.166 [2024-11-26 23:04:54.180831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.166 [2024-11-26 23:04:54.185769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.166 [2024-11-26 23:04:54.185825] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:15.166 [2024-11-26 23:04:54.185835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.899 ms 00:20:15.166 [2024-11-26 23:04:54.185846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.166 [2024-11-26 23:04:54.185974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.166 [2024-11-26 23:04:54.185988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:15.166 [2024-11-26 23:04:54.185998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:20:15.166 [2024-11-26 23:04:54.186006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.166 [2024-11-26 23:04:54.189175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.166 [2024-11-26 23:04:54.189218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:15.166 [2024-11-26 23:04:54.189227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.151 ms 00:20:15.166 [2024-11-26 23:04:54.189234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.166 [2024-11-26 23:04:54.192022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.166 [2024-11-26 23:04:54.192181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:15.166 [2024-11-26 23:04:54.192198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.748 ms 00:20:15.166 [2024-11-26 23:04:54.192206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.166 [2024-11-26 23:04:54.194513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.166 [2024-11-26 23:04:54.194571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:15.166 [2024-11-26 23:04:54.194583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.002 ms 00:20:15.166 [2024-11-26 23:04:54.194590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.166 [2024-11-26 23:04:54.196718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.166 [2024-11-26 23:04:54.196764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:15.166 [2024-11-26 23:04:54.196774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.049 ms 00:20:15.166 [2024-11-26 23:04:54.196780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.166 [2024-11-26 23:04:54.196818] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:15.166 [2024-11-26 23:04:54.196836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:15.166 [2024-11-26 23:04:54.196846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:15.166 [2024-11-26 23:04:54.196854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:15.166 [2024-11-26 23:04:54.196862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:15.166 [2024-11-26 23:04:54.196870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:15.166 [2024-11-26 23:04:54.196877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:15.166 [2024-11-26 
23:04:54.196886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:15.166 [2024-11-26 23:04:54.196893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:15.166 [2024-11-26 23:04:54.196901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:15.166 [2024-11-26 23:04:54.196909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:15.167 [2024-11-26 23:04:54.196917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:15.167 [2024-11-26 23:04:54.196924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:15.167 [2024-11-26 23:04:54.196931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:15.167 [2024-11-26 23:04:54.196939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:15.167 [2024-11-26 23:04:54.196946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:15.167 [2024-11-26 23:04:54.196953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:15.167 [2024-11-26 23:04:54.196961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:15.167 [2024-11-26 23:04:54.196968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:15.167 [2024-11-26 23:04:54.196975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:15.167 [2024-11-26 23:04:54.196982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:15.167 [2024-11-26 23:04:54.196990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:15.167 [2024-11-26 23:04:54.196997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:15.167 [2024-11-26 23:04:54.197004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:15.167 [2024-11-26 23:04:54.197011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:15.167 [2024-11-26 23:04:54.197018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:15.167 [2024-11-26 23:04:54.197026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:15.167 [2024-11-26 23:04:54.197034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:15.167 [2024-11-26 23:04:54.197042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:15.167 [2024-11-26 23:04:54.197049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:15.167 [2024-11-26 23:04:54.197057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:15.167 [2024-11-26 23:04:54.197067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 
00:20:15.167 [2024-11-26 23:04:54.197075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:15.167 [2024-11-26 23:04:54.197082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:15.167 [2024-11-26 23:04:54.197090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:15.167 [2024-11-26 23:04:54.197097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:15.167 [2024-11-26 23:04:54.197105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:15.168 [2024-11-26 23:04:54.197112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:15.168 [2024-11-26 23:04:54.197119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:15.168 [2024-11-26 23:04:54.197127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:15.168 [2024-11-26 23:04:54.197134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:15.168 [2024-11-26 23:04:54.197142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:15.168 [2024-11-26 23:04:54.197150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:15.168 [2024-11-26 23:04:54.197157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:15.168 [2024-11-26 23:04:54.197164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:15.168 [2024-11-26 23:04:54.197172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:15.168 [2024-11-26 23:04:54.197179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:15.168 [2024-11-26 23:04:54.197188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:15.168 [2024-11-26 23:04:54.197195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:15.168 [2024-11-26 23:04:54.197203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:15.168 [2024-11-26 23:04:54.197210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:15.168 [2024-11-26 23:04:54.197217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:15.168 [2024-11-26 23:04:54.197225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:15.168 [2024-11-26 23:04:54.197233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:15.168 [2024-11-26 23:04:54.197241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:15.168 [2024-11-26 23:04:54.197248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:15.169 [2024-11-26 23:04:54.197256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 
wr_cnt: 0 state: free 00:20:15.169 [2024-11-26 23:04:54.197263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:15.169 [2024-11-26 23:04:54.197270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:15.169 [2024-11-26 23:04:54.197277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:15.169 [2024-11-26 23:04:54.197284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:15.169 [2024-11-26 23:04:54.197291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:15.169 [2024-11-26 23:04:54.197314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:15.169 [2024-11-26 23:04:54.197325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:15.169 [2024-11-26 23:04:54.197333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:15.169 [2024-11-26 23:04:54.197340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:15.169 [2024-11-26 23:04:54.197348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:15.169 [2024-11-26 23:04:54.197356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:15.169 [2024-11-26 23:04:54.197363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:15.169 [2024-11-26 23:04:54.197371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:15.169 [2024-11-26 23:04:54.197379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:15.169 [2024-11-26 23:04:54.197387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:15.169 [2024-11-26 23:04:54.197394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:15.169 [2024-11-26 23:04:54.197402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:15.169 [2024-11-26 23:04:54.197410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:15.169 [2024-11-26 23:04:54.197418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:15.169 [2024-11-26 23:04:54.197426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:15.169 [2024-11-26 23:04:54.197434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:15.170 [2024-11-26 23:04:54.197441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:15.170 [2024-11-26 23:04:54.197449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:15.170 [2024-11-26 23:04:54.197457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:15.170 [2024-11-26 23:04:54.197465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:15.170 [2024-11-26 23:04:54.197485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:15.170 [2024-11-26 23:04:54.197493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:15.170 [2024-11-26 23:04:54.197500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:15.170 [2024-11-26 23:04:54.197508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:15.170 [2024-11-26 23:04:54.197515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:15.170 [2024-11-26 23:04:54.197521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:15.170 [2024-11-26 23:04:54.197529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:15.170 [2024-11-26 23:04:54.197536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:15.170 [2024-11-26 23:04:54.197544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:15.170 [2024-11-26 23:04:54.197551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:15.170 [2024-11-26 23:04:54.197558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:15.170 [2024-11-26 23:04:54.197565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:15.170 [2024-11-26 23:04:54.197573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:15.170 [2024-11-26 23:04:54.197582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:15.170 [2024-11-26 23:04:54.197590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:15.170 [2024-11-26 23:04:54.197597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:15.170 [2024-11-26 23:04:54.197606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:15.170 [2024-11-26 23:04:54.197613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:15.170 [2024-11-26 23:04:54.197631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:15.170 [2024-11-26 23:04:54.197646] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:15.170 [2024-11-26 23:04:54.197655] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e0e83087-eb0e-4185-b4d1-3f0b13ab70d0 00:20:15.170 [2024-11-26 23:04:54.197664] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:15.171 [2024-11-26 23:04:54.197672] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:15.171 [2024-11-26 23:04:54.197680] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:15.171 [2024-11-26 23:04:54.197691] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:15.171 [2024-11-26 23:04:54.197699] ftl_debug.c: 
218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:15.171 [2024-11-26 23:04:54.197707] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:15.171 [2024-11-26 23:04:54.197715] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:15.171 [2024-11-26 23:04:54.197721] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:15.171 [2024-11-26 23:04:54.197727] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:15.171 [2024-11-26 23:04:54.197735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.171 [2024-11-26 23:04:54.197742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:15.171 [2024-11-26 23:04:54.197759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.918 ms 00:20:15.171 [2024-11-26 23:04:54.197767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.171 [2024-11-26 23:04:54.200330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.171 [2024-11-26 23:04:54.200360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:15.171 [2024-11-26 23:04:54.200379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.544 ms 00:20:15.171 [2024-11-26 23:04:54.200387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.171 [2024-11-26 23:04:54.200541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.171 [2024-11-26 23:04:54.200552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:15.171 [2024-11-26 23:04:54.200562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:20:15.171 [2024-11-26 23:04:54.200569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.171 [2024-11-26 23:04:54.210263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.171 [2024-11-26 23:04:54.210339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:15.171 [2024-11-26 23:04:54.210351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.171 [2024-11-26 23:04:54.210360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.171 [2024-11-26 23:04:54.210484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.171 [2024-11-26 23:04:54.210497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:15.171 [2024-11-26 23:04:54.210509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.171 [2024-11-26 23:04:54.210517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.171 [2024-11-26 23:04:54.210586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.171 [2024-11-26 23:04:54.210598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:15.171 [2024-11-26 23:04:54.210609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.171 [2024-11-26 23:04:54.210618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.172 [2024-11-26 23:04:54.210636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.172 [2024-11-26 23:04:54.210645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:15.172 [2024-11-26 23:04:54.210653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:20:15.172 [2024-11-26 23:04:54.210661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.172 [2024-11-26 23:04:54.229536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.172 [2024-11-26 23:04:54.229791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:15.172 [2024-11-26 23:04:54.229811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.172 [2024-11-26 23:04:54.229820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.172 [2024-11-26 23:04:54.244554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.172 [2024-11-26 23:04:54.244757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:15.172 [2024-11-26 23:04:54.244777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.172 [2024-11-26 23:04:54.244787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.172 [2024-11-26 23:04:54.244846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.172 [2024-11-26 23:04:54.244856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:15.172 [2024-11-26 23:04:54.244876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.172 [2024-11-26 23:04:54.244890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.172 [2024-11-26 23:04:54.244929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.172 [2024-11-26 23:04:54.244939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:15.172 [2024-11-26 23:04:54.244949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.172 [2024-11-26 23:04:54.244958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.172 [2024-11-26 23:04:54.245055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.172 [2024-11-26 23:04:54.245068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:15.172 [2024-11-26 23:04:54.245078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.172 [2024-11-26 23:04:54.245086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.172 [2024-11-26 23:04:54.245131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.172 [2024-11-26 23:04:54.245141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:15.172 [2024-11-26 23:04:54.245150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.172 [2024-11-26 23:04:54.245159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.172 [2024-11-26 23:04:54.245215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.172 [2024-11-26 23:04:54.245226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:15.172 [2024-11-26 23:04:54.245235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.172 [2024-11-26 23:04:54.245244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.172 [2024-11-26 23:04:54.245333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.172 [2024-11-26 23:04:54.245346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:15.173 [2024-11-26 23:04:54.245357] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.173 [2024-11-26 23:04:54.245367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.173 [2024-11-26 23:04:54.245549] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 83.411 ms, result 0 00:20:15.435 00:20:15.435 00:20:15.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.435 23:04:54 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=89848 00:20:15.435 23:04:54 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:15.435 23:04:54 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 89848 00:20:15.435 23:04:54 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 89848 ']' 00:20:15.435 23:04:54 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.435 23:04:54 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:15.435 23:04:54 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.435 23:04:54 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:15.435 23:04:54 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:15.695 [2024-11-26 23:04:54.632816] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:20:15.695 [2024-11-26 23:04:54.632965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89848 ] 00:20:15.695 [2024-11-26 23:04:54.772333] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:20:15.695 [2024-11-26 23:04:54.800608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.955 [2024-11-26 23:04:54.842576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.525 23:04:55 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:16.525 23:04:55 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:16.525 23:04:55 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:16.785 [2024-11-26 23:04:55.710101] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:16.785 [2024-11-26 23:04:55.710193] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:16.785 [2024-11-26 23:04:55.890404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.785 [2024-11-26 23:04:55.890472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:16.785 [2024-11-26 23:04:55.890496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:16.785 [2024-11-26 23:04:55.890506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.785 [2024-11-26 23:04:55.893249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.785 [2024-11-26 23:04:55.893479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:16.785 [2024-11-26 23:04:55.893509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.717 ms 00:20:16.785 [2024-11-26 23:04:55.893518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.785 [2024-11-26 23:04:55.893742] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:16.785 [2024-11-26 23:04:55.894065] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:16.785 [2024-11-26 23:04:55.894089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.785 [2024-11-26 23:04:55.894099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:16.785 [2024-11-26 23:04:55.894113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:20:16.785 [2024-11-26 23:04:55.894122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.785 [2024-11-26 23:04:55.896596] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:16.785 [2024-11-26 23:04:55.901485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.785 [2024-11-26 23:04:55.901541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:16.785 [2024-11-26 23:04:55.901553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.899 ms 00:20:16.785 [2024-11-26 23:04:55.901563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.785 [2024-11-26 23:04:55.901654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.785 [2024-11-26 23:04:55.901671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:16.785 [2024-11-26 23:04:55.901681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:16.785 [2024-11-26 23:04:55.901694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.047 [2024-11-26 23:04:55.913591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.047 [2024-11-26 
23:04:55.913642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:17.047 [2024-11-26 23:04:55.913655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.838 ms 00:20:17.047 [2024-11-26 23:04:55.913669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.047 [2024-11-26 23:04:55.913796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.047 [2024-11-26 23:04:55.913810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:17.047 [2024-11-26 23:04:55.913824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:20:17.047 [2024-11-26 23:04:55.913835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.047 [2024-11-26 23:04:55.913867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.047 [2024-11-26 23:04:55.913877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:17.047 [2024-11-26 23:04:55.913886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:17.047 [2024-11-26 23:04:55.913895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.047 [2024-11-26 23:04:55.913925] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:17.047 [2024-11-26 23:04:55.916693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.047 [2024-11-26 23:04:55.916736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:17.047 [2024-11-26 23:04:55.916749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.769 ms 00:20:17.047 [2024-11-26 23:04:55.916762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.047 [2024-11-26 23:04:55.916808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.047 [2024-11-26 23:04:55.916816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:17.047 [2024-11-26 23:04:55.916827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:17.047 [2024-11-26 23:04:55.916839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.047 [2024-11-26 23:04:55.916865] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:17.047 [2024-11-26 23:04:55.916889] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:17.047 [2024-11-26 23:04:55.916942] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:17.047 [2024-11-26 23:04:55.916958] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:17.047 [2024-11-26 23:04:55.917075] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:17.047 [2024-11-26 23:04:55.917086] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:17.047 [2024-11-26 23:04:55.917103] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:17.047 [2024-11-26 23:04:55.917114] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:17.047 [2024-11-26 23:04:55.917128] ftl_layout.c: 687:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:17.047 [2024-11-26 23:04:55.917144] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:17.047 [2024-11-26 23:04:55.917154] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:17.047 [2024-11-26 23:04:55.917165] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:17.047 [2024-11-26 23:04:55.917176] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:17.047 [2024-11-26 23:04:55.917183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.047 [2024-11-26 23:04:55.917194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:17.047 [2024-11-26 23:04:55.917202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:20:17.047 [2024-11-26 23:04:55.917212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.047 [2024-11-26 23:04:55.917322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.047 [2024-11-26 23:04:55.917335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:17.047 [2024-11-26 23:04:55.917345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:20:17.047 [2024-11-26 23:04:55.917356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.047 [2024-11-26 23:04:55.917466] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:17.047 [2024-11-26 23:04:55.917486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:17.047 [2024-11-26 23:04:55.917495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:17.047 [2024-11-26 23:04:55.917511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.047 [2024-11-26 23:04:55.917521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:17.047 [2024-11-26 23:04:55.917531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:17.047 [2024-11-26 23:04:55.917539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:17.047 [2024-11-26 23:04:55.917551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:17.047 [2024-11-26 23:04:55.917567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:17.047 [2024-11-26 23:04:55.917577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:17.047 [2024-11-26 23:04:55.917584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:17.047 [2024-11-26 23:04:55.917594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:17.047 [2024-11-26 23:04:55.917602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:17.047 [2024-11-26 23:04:55.917611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:17.047 [2024-11-26 23:04:55.917620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:17.047 [2024-11-26 23:04:55.917630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.047 [2024-11-26 23:04:55.917638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:17.047 [2024-11-26 23:04:55.917647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:17.047 [2024-11-26 23:04:55.917665] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:20:17.047 [2024-11-26 23:04:55.917678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:17.047 [2024-11-26 23:04:55.917688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:17.047 [2024-11-26 23:04:55.917699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.047 [2024-11-26 23:04:55.917707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:17.047 [2024-11-26 23:04:55.917716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:17.047 [2024-11-26 23:04:55.917722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.047 [2024-11-26 23:04:55.917731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:17.047 [2024-11-26 23:04:55.917738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:17.047 [2024-11-26 23:04:55.917747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.047 [2024-11-26 23:04:55.917755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:17.047 [2024-11-26 23:04:55.917765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:17.047 [2024-11-26 23:04:55.917771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.047 [2024-11-26 23:04:55.917780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:17.047 [2024-11-26 23:04:55.917787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:17.047 [2024-11-26 23:04:55.917796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:17.047 [2024-11-26 23:04:55.917802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:17.047 [2024-11-26 23:04:55.917814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:17.047 [2024-11-26 23:04:55.917821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:17.047 [2024-11-26 23:04:55.917830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:17.047 [2024-11-26 23:04:55.917836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:17.047 [2024-11-26 23:04:55.917844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.047 [2024-11-26 23:04:55.917851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:17.047 [2024-11-26 23:04:55.917860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:17.047 [2024-11-26 23:04:55.917866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.047 [2024-11-26 23:04:55.917875] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:17.047 [2024-11-26 23:04:55.917883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:17.047 [2024-11-26 23:04:55.917892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:17.047 [2024-11-26 23:04:55.917900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.047 [2024-11-26 23:04:55.917910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:17.047 [2024-11-26 23:04:55.917916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:17.047 [2024-11-26 23:04:55.917925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:17.047 [2024-11-26 23:04:55.917932] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:17.047 [2024-11-26 23:04:55.917943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:17.047 [2024-11-26 23:04:55.917954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:17.047 [2024-11-26 23:04:55.917966] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:17.047 [2024-11-26 23:04:55.917980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:17.047 [2024-11-26 23:04:55.917995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:17.047 [2024-11-26 23:04:55.918002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:17.047 [2024-11-26 23:04:55.918012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:17.047 [2024-11-26 23:04:55.918020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:17.047 [2024-11-26 23:04:55.918029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:17.047 [2024-11-26 23:04:55.918041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:17.047 [2024-11-26 23:04:55.918050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:17.047 [2024-11-26 23:04:55.918057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:17.047 [2024-11-26 23:04:55.918067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:17.047 [2024-11-26 23:04:55.918074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:17.048 [2024-11-26 23:04:55.918085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:17.048 [2024-11-26 23:04:55.918093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:17.048 [2024-11-26 23:04:55.918105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:17.048 [2024-11-26 23:04:55.918113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:17.048 [2024-11-26 23:04:55.918123] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:17.048 [2024-11-26 23:04:55.918131] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:17.048 [2024-11-26 23:04:55.918146] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:20:17.048 [2024-11-26 23:04:55.918153] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:17.048 [2024-11-26 23:04:55.918163] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:17.048 [2024-11-26 23:04:55.918171] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:17.048 [2024-11-26 23:04:55.918182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.048 [2024-11-26 23:04:55.918190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:17.048 [2024-11-26 23:04:55.918201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.783 ms 00:20:17.048 [2024-11-26 23:04:55.918210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.048 [2024-11-26 23:04:55.938619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.048 [2024-11-26 23:04:55.938699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:17.048 [2024-11-26 23:04:55.938716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.303 ms 00:20:17.048 [2024-11-26 23:04:55.938728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.048 [2024-11-26 23:04:55.938875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.048 [2024-11-26 23:04:55.938887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:17.048 [2024-11-26 23:04:55.938899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:20:17.048 [2024-11-26 23:04:55.938908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.048 [2024-11-26 23:04:55.956282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.048 [2024-11-26 23:04:55.956348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:17.048 [2024-11-26 23:04:55.956368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.345 ms 00:20:17.048 [2024-11-26 23:04:55.956379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.048 [2024-11-26 23:04:55.956458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.048 [2024-11-26 23:04:55.956469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:17.048 [2024-11-26 23:04:55.956481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:17.048 [2024-11-26 23:04:55.956497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.048 [2024-11-26 23:04:55.957190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.048 [2024-11-26 23:04:55.957225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:17.048 [2024-11-26 23:04:55.957240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:20:17.048 [2024-11-26 23:04:55.957256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.048 [2024-11-26 23:04:55.957450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.048 [2024-11-26 23:04:55.957461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:17.048 [2024-11-26 23:04:55.957472] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:20:17.048 [2024-11-26 23:04:55.957480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.048 [2024-11-26 23:04:55.969212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.048 [2024-11-26 23:04:55.969264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:17.048 [2024-11-26 23:04:55.969281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.695 ms 00:20:17.048 [2024-11-26 23:04:55.969290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.048 [2024-11-26 23:04:55.986766] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:17.048 [2024-11-26 23:04:55.986972] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:17.048 [2024-11-26 23:04:55.987000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.048 [2024-11-26 23:04:55.987010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:17.048 [2024-11-26 23:04:55.987024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.546 ms 00:20:17.048 [2024-11-26 23:04:55.987033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.048 [2024-11-26 23:04:56.006867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.048 [2024-11-26 23:04:56.006929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:17.048 [2024-11-26 23:04:56.006949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.770 ms 00:20:17.048 [2024-11-26 23:04:56.006961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.048 [2024-11-26 23:04:56.010133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.048 [2024-11-26 23:04:56.010352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:17.048 [2024-11-26 23:04:56.010378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.059 ms 00:20:17.048 [2024-11-26 23:04:56.010386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.048 [2024-11-26 23:04:56.013435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.048 [2024-11-26 23:04:56.013594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:17.048 [2024-11-26 23:04:56.013617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.994 ms 00:20:17.048 [2024-11-26 23:04:56.013625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.048 [2024-11-26 23:04:56.014327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.048 [2024-11-26 23:04:56.014373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:17.048 [2024-11-26 23:04:56.014389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:20:17.048 [2024-11-26 23:04:56.014404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.048 [2024-11-26 23:04:56.044465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.048 [2024-11-26 23:04:56.044751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:17.048 [2024-11-26 23:04:56.044783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.025 ms 
00:20:17.048 [2024-11-26 23:04:56.044798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.048 [2024-11-26 23:04:56.054109] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:17.048 [2024-11-26 23:04:56.079151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.048 [2024-11-26 23:04:56.079219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:17.048 [2024-11-26 23:04:56.079238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.158 ms 00:20:17.048 [2024-11-26 23:04:56.079258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.048 [2024-11-26 23:04:56.079416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.048 [2024-11-26 23:04:56.079433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:17.048 [2024-11-26 23:04:56.079470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:17.048 [2024-11-26 23:04:56.079483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.048 [2024-11-26 23:04:56.079562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.048 [2024-11-26 23:04:56.079575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:17.048 [2024-11-26 23:04:56.079585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:17.048 [2024-11-26 23:04:56.079597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.048 [2024-11-26 23:04:56.079628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.048 [2024-11-26 23:04:56.079655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:17.048 [2024-11-26 23:04:56.079664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:17.048 [2024-11-26 23:04:56.079674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.048 [2024-11-26 23:04:56.079719] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:17.048 [2024-11-26 23:04:56.079734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.048 [2024-11-26 23:04:56.079743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:17.048 [2024-11-26 23:04:56.079754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:17.048 [2024-11-26 23:04:56.079763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.048 [2024-11-26 23:04:56.086868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.048 [2024-11-26 23:04:56.087054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:17.048 [2024-11-26 23:04:56.087083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.074 ms 00:20:17.048 [2024-11-26 23:04:56.087093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.048 [2024-11-26 23:04:56.087195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.049 [2024-11-26 23:04:56.087206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:17.049 [2024-11-26 23:04:56.087218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:17.049 [2024-11-26 23:04:56.087227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.049 [2024-11-26 
23:04:56.088550] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:17.049 [2024-11-26 23:04:56.089970] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 197.732 ms, result 0 00:20:17.049 [2024-11-26 23:04:56.091486] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:17.049 Some configs were skipped because the RPC state that can call them passed over. 00:20:17.049 23:04:56 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:17.309 [2024-11-26 23:04:56.325890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.309 true 00:20:17.309 [2024-11-26 23:04:56.326147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:17.309 [2024-11-26 23:04:56.326175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.138 ms 00:20:17.309 [2024-11-26 23:04:56.326188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.309 [2024-11-26 23:04:56.326235] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.493 ms, result 0 00:20:17.309 23:04:56 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:17.588 [2024-11-26 23:04:56.545775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.588 [2024-11-26 23:04:56.545960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:17.588 [2024-11-26 23:04:56.546033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.772 ms 00:20:17.588 [2024-11-26 23:04:56.546058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.588 [2024-11-26 23:04:56.546124] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.126 ms, result 0 00:20:17.588 true 00:20:17.588 23:04:56 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 89848 00:20:17.588 23:04:56 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 89848 ']' 00:20:17.588 23:04:56 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 89848 00:20:17.588 23:04:56 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:17.588 23:04:56 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:17.588 23:04:56 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89848 00:20:17.588 23:04:56 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:17.588 23:04:56 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:17.588 23:04:56 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89848' 00:20:17.588 killing process with pid 89848 00:20:17.588 23:04:56 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 89848 00:20:17.588 23:04:56 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 89848 00:20:17.851 [2024-11-26 23:04:56.801854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.851 [2024-11-26 23:04:56.801933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:17.851 [2024-11-26 23:04:56.801951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:17.851 [2024-11-26 
23:04:56.801965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.851 [2024-11-26 23:04:56.801994] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:17.851 [2024-11-26 23:04:56.803016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.851 [2024-11-26 23:04:56.803069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:17.851 [2024-11-26 23:04:56.803084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.996 ms 00:20:17.851 [2024-11-26 23:04:56.803095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.851 [2024-11-26 23:04:56.803440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.851 [2024-11-26 23:04:56.803453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:17.851 [2024-11-26 23:04:56.803464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:20:17.851 [2024-11-26 23:04:56.803473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.851 [2024-11-26 23:04:56.808150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.851 [2024-11-26 23:04:56.808196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:17.851 [2024-11-26 23:04:56.808213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.650 ms 00:20:17.851 [2024-11-26 23:04:56.808222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.851 [2024-11-26 23:04:56.815424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.851 [2024-11-26 23:04:56.815633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:17.851 [2024-11-26 23:04:56.815663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.145 ms 00:20:17.851 [2024-11-26 23:04:56.815677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.851 [2024-11-26 23:04:56.818784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.851 [2024-11-26 23:04:56.818946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:17.851 [2024-11-26 23:04:56.818968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.019 ms 00:20:17.851 [2024-11-26 23:04:56.818976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.851 [2024-11-26 23:04:56.824551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.851 [2024-11-26 23:04:56.824605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:17.851 [2024-11-26 23:04:56.824623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.429 ms 00:20:17.851 [2024-11-26 23:04:56.824632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.851 [2024-11-26 23:04:56.824800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.851 [2024-11-26 23:04:56.824819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:17.851 [2024-11-26 23:04:56.824831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:20:17.851 [2024-11-26 23:04:56.824839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.851 [2024-11-26 23:04:56.828382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.851 [2024-11-26 23:04:56.828428] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:17.851 [2024-11-26 23:04:56.828445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.511 ms 00:20:17.851 [2024-11-26 23:04:56.828452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.851 [2024-11-26 23:04:56.831345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.851 [2024-11-26 23:04:56.831388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:17.851 [2024-11-26 23:04:56.831401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.838 ms 00:20:17.851 [2024-11-26 23:04:56.831409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.851 [2024-11-26 23:04:56.833757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.851 [2024-11-26 23:04:56.833909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:17.851 [2024-11-26 23:04:56.833932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.294 ms 00:20:17.851 [2024-11-26 23:04:56.833940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.851 [2024-11-26 23:04:56.836246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.851 [2024-11-26 23:04:56.836315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:17.851 [2024-11-26 23:04:56.836330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.226 ms 00:20:17.851 [2024-11-26 23:04:56.836339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.851 [2024-11-26 23:04:56.836388] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:17.851 [2024-11-26 23:04:56.836407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:17.851 [2024-11-26 23:04:56.836427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:17.851 [2024-11-26 23:04:56.836436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:17.851 [2024-11-26 23:04:56.836449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:17.851 [2024-11-26 23:04:56.836459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:17.851 [2024-11-26 23:04:56.836472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:17.851 [2024-11-26 23:04:56.836481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:17.851 [2024-11-26 23:04:56.836496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:17.851 [2024-11-26 23:04:56.836505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:17.851 [2024-11-26 23:04:56.836516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:17.851 [2024-11-26 23:04:56.836525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:17.851 [2024-11-26 23:04:56.836538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:17.851 [2024-11-26 23:04:56.836547] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:17.851 [2024-11-26 23:04:56.836558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:17.851 [2024-11-26 23:04:56.836567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:17.851 [2024-11-26 23:04:56.836579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:17.851 [2024-11-26 23:04:56.836587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:17.851 [2024-11-26 23:04:56.836601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:17.851 [2024-11-26 23:04:56.836609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:17.851 [2024-11-26 23:04:56.836620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:17.851 [2024-11-26 23:04:56.836630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:17.851 [2024-11-26 23:04:56.836642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:17.851 [2024-11-26 23:04:56.836650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:17.851 [2024-11-26 23:04:56.836661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:17.851 [2024-11-26 23:04:56.836670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:17.851 [2024-11-26 23:04:56.836682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:17.851 [2024-11-26 23:04:56.836690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:17.851 [2024-11-26 23:04:56.836701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:17.851 [2024-11-26 23:04:56.836711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:17.851 [2024-11-26 23:04:56.836723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.836731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.836744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.836754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.836779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.836789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.836801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.836811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.836823] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.836832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.836842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.836851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.836862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.836871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.836882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.836891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.836902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.836911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.836922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.836930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.836943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.836952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.836963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.836972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.836983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.836991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 
23:04:56.837073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:20:17.852 [2024-11-26 23:04:56.837374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:17.852 [2024-11-26 23:04:56.837538] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:17.852 [2024-11-26 23:04:56.837554] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e0e83087-eb0e-4185-b4d1-3f0b13ab70d0 00:20:17.852 [2024-11-26 23:04:56.837565] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:17.852 [2024-11-26 23:04:56.837577] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:17.852 [2024-11-26 23:04:56.837586] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:17.852 [2024-11-26 23:04:56.837598] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:17.852 [2024-11-26 23:04:56.837611] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:17.852 [2024-11-26 23:04:56.837623] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:17.852 [2024-11-26 23:04:56.837632] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:17.852 [2024-11-26 23:04:56.837644] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:17.852 [2024-11-26 23:04:56.837652] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:17.852 [2024-11-26 23:04:56.837665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.852 [2024-11-26 23:04:56.837674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:17.852 [2024-11-26 23:04:56.837689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.279 ms 00:20:17.852 [2024-11-26 23:04:56.837699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:17.852 [2024-11-26 23:04:56.840868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.852 [2024-11-26 23:04:56.841036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:17.852 [2024-11-26 23:04:56.841061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.107 ms 00:20:17.852 [2024-11-26 23:04:56.841071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.852 [2024-11-26 23:04:56.841227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.852 [2024-11-26 23:04:56.841237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:17.852 [2024-11-26 23:04:56.841250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:20:17.852 [2024-11-26 23:04:56.841261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.853 [2024-11-26 23:04:56.852380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.853 [2024-11-26 23:04:56.852434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:17.853 [2024-11-26 23:04:56.852449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.853 [2024-11-26 23:04:56.852458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.853 [2024-11-26 23:04:56.852574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.853 [2024-11-26 23:04:56.852585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:17.853 [2024-11-26 23:04:56.852601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.853 [2024-11-26 23:04:56.852613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.853 [2024-11-26 23:04:56.852666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.853 [2024-11-26 23:04:56.852676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:17.853 [2024-11-26 23:04:56.852687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.853 [2024-11-26 23:04:56.852696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.853 [2024-11-26 23:04:56.852724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.853 [2024-11-26 23:04:56.852733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:17.853 [2024-11-26 23:04:56.852744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.853 [2024-11-26 23:04:56.852752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.853 [2024-11-26 23:04:56.873641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.853 [2024-11-26 23:04:56.873706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:17.853 [2024-11-26 23:04:56.873728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.853 [2024-11-26 23:04:56.873742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.853 [2024-11-26 23:04:56.889706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.853 [2024-11-26 23:04:56.889774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:17.853 [2024-11-26 23:04:56.889794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.853 
[2024-11-26 23:04:56.889808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.853 [2024-11-26 23:04:56.889908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.853 [2024-11-26 23:04:56.889926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:17.853 [2024-11-26 23:04:56.889943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.853 [2024-11-26 23:04:56.889952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.853 [2024-11-26 23:04:56.889991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.853 [2024-11-26 23:04:56.890000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:17.853 [2024-11-26 23:04:56.890012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.853 [2024-11-26 23:04:56.890021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.853 [2024-11-26 23:04:56.890119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.853 [2024-11-26 23:04:56.890129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:17.853 [2024-11-26 23:04:56.890140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.853 [2024-11-26 23:04:56.890149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.853 [2024-11-26 23:04:56.890190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.853 [2024-11-26 23:04:56.890200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:17.853 [2024-11-26 23:04:56.890215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.853 [2024-11-26 23:04:56.890223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.853 [2024-11-26 23:04:56.890280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.853 [2024-11-26 23:04:56.890343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:17.853 [2024-11-26 23:04:56.890356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.853 [2024-11-26 23:04:56.890368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.853 [2024-11-26 23:04:56.890435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.853 [2024-11-26 23:04:56.890447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:17.853 [2024-11-26 23:04:56.890459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.853 [2024-11-26 23:04:56.890471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.853 [2024-11-26 23:04:56.890666] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 88.767 ms, result 0 00:20:18.114 23:04:57 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:18.375 [2024-11-26 23:04:57.298200] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
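The records above capture the whole trim verification flow: trim.sh unmaps two 1024-block ranges over JSON-RPC (the stray "true" tokens interleaved with the trace records are rpc.py printing its success result), kills the app hosting ftl0, which triggers the 'FTL shutdown' management process that persists L2P, NV cache, band, and trim metadata, and then re-opens the device with spdk_dd to read the trimmed range back. A condensed sketch of that sequence, using the exact commands recorded in the log; $svcpid is a stand-in for the app pid (89848 in this run):

#!/usr/bin/env bash
# Minimal sketch of the sequence traced above (not the test script itself).
SPDK=/home/vagrant/spdk_repo/spdk

# trim.sh@99 and @100: trim two 1024-block ranges on the FTL bdev via RPC;
# rpc.py prints "true" on success.
"$SPDK/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
"$SPDK/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

# trim.sh@102 (killprocess): stop the app that owns ftl0 and wait for it to
# exit; the 'FTL shutdown' process logged above is this teardown.
kill "$svcpid" && wait "$svcpid"

# trim.sh@105: re-open the device standalone and read 65536 blocks back for
# verification; ftl.json carries the bdev configuration saved earlier.
"$SPDK/build/bin/spdk_dd" --ib=ftl0 --of="$SPDK/test/ftl/data" \
    --count=65536 --json="$SPDK/test/ftl/config/ftl.json"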
00:20:18.375 [2024-11-26 23:04:57.298388] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89885 ] 00:20:18.375 [2024-11-26 23:04:57.436896] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:18.375 [2024-11-26 23:04:57.466497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.636 [2024-11-26 23:04:57.507410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.636 [2024-11-26 23:04:57.660445] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:18.636 [2024-11-26 23:04:57.660544] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:18.901 [2024-11-26 23:04:57.824359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.901 [2024-11-26 23:04:57.824424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:18.901 [2024-11-26 23:04:57.824441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:18.901 [2024-11-26 23:04:57.824451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.901 [2024-11-26 23:04:57.827190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.901 [2024-11-26 23:04:57.827243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:18.901 [2024-11-26 23:04:57.827256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.716 ms 00:20:18.901 [2024-11-26 23:04:57.827269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.901 [2024-11-26 23:04:57.827398] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:18.901 [2024-11-26 23:04:57.827723] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:18.901 [2024-11-26 23:04:57.827740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.901 [2024-11-26 23:04:57.827749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:18.901 [2024-11-26 23:04:57.827759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:20:18.901 [2024-11-26 23:04:57.827767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.901 [2024-11-26 23:04:57.830088] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:18.901 [2024-11-26 23:04:57.835028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.901 [2024-11-26 23:04:57.835086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:18.901 [2024-11-26 23:04:57.835098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.942 ms 00:20:18.901 [2024-11-26 23:04:57.835108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.901 [2024-11-26 23:04:57.835203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.901 [2024-11-26 23:04:57.835215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:18.901 [2024-11-26 23:04:57.835225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:18.901 [2024-11-26 
23:04:57.835234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.901 [2024-11-26 23:04:57.846596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.901 [2024-11-26 23:04:57.846643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:18.901 [2024-11-26 23:04:57.846656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.310 ms 00:20:18.901 [2024-11-26 23:04:57.846669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.901 [2024-11-26 23:04:57.846845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.901 [2024-11-26 23:04:57.846858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:18.901 [2024-11-26 23:04:57.846868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:20:18.901 [2024-11-26 23:04:57.846879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.901 [2024-11-26 23:04:57.846911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.901 [2024-11-26 23:04:57.846926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:18.901 [2024-11-26 23:04:57.846938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:18.901 [2024-11-26 23:04:57.846946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.901 [2024-11-26 23:04:57.846968] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:18.901 [2024-11-26 23:04:57.849682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.901 [2024-11-26 23:04:57.849723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:18.901 [2024-11-26 23:04:57.849738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.719 ms 00:20:18.901 [2024-11-26 23:04:57.849750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.901 [2024-11-26 23:04:57.849799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.901 [2024-11-26 23:04:57.849809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:18.901 [2024-11-26 23:04:57.849818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:18.901 [2024-11-26 23:04:57.849826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.901 [2024-11-26 23:04:57.849845] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:18.901 [2024-11-26 23:04:57.849870] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:18.901 [2024-11-26 23:04:57.849916] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:18.901 [2024-11-26 23:04:57.849940] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:18.901 [2024-11-26 23:04:57.850054] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:18.901 [2024-11-26 23:04:57.850069] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:18.901 [2024-11-26 23:04:57.850081] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
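The "SB metadata layout" tables in these dumps describe each superblock region as a type/version pair plus blk_offs/blk_sz counted in FTL blocks. Cross-checking against the MiB figures printed alongside (the l2p region, type:0x2 with blk_sz:0x5a00 = 23040 blocks, is the one reported as 90.00 MiB) pins the block size at 4096 bytes. A hypothetical gawk filter to convert those hex pairs into MiB from a saved copy of this console output (build.log is an assumed filename; strtonum() requires gawk):

gawk '/Region type:0x/ {
    t = ""; off = -1; sz = -1
    for (i = 1; i <= NF; i++) {
        if ($i ~ /^type:/)     t   = $i                        # e.g. type:0xa
        if ($i ~ /^blk_offs:/) off = strtonum(substr($i, 10))  # hex offset in blocks
        if ($i ~ /^blk_sz:/)   sz  = strtonum(substr($i, 8))   # hex size in blocks
    }
    if (off >= 0 && sz >= 0)
        printf "%-14s offset %10.2f MiB  size %10.2f MiB\n",
               t, off * 4096 / 2^20, sz * 4096 / 2^20
}' build.log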
00:20:18.901 [2024-11-26 23:04:57.850092] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:18.901 [2024-11-26 23:04:57.850106] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:18.901 [2024-11-26 23:04:57.850116] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:18.901 [2024-11-26 23:04:57.850124] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:18.901 [2024-11-26 23:04:57.850134] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:18.901 [2024-11-26 23:04:57.850145] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:18.901 [2024-11-26 23:04:57.850153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.901 [2024-11-26 23:04:57.850162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:18.901 [2024-11-26 23:04:57.850174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:20:18.901 [2024-11-26 23:04:57.850181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.901 [2024-11-26 23:04:57.850269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.901 [2024-11-26 23:04:57.850278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:18.901 [2024-11-26 23:04:57.850286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:18.901 [2024-11-26 23:04:57.850293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.901 [2024-11-26 23:04:57.850424] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:18.901 [2024-11-26 23:04:57.850436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:18.901 [2024-11-26 23:04:57.850450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:18.901 [2024-11-26 23:04:57.850459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.901 [2024-11-26 23:04:57.850478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:18.901 [2024-11-26 23:04:57.850486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:18.901 [2024-11-26 23:04:57.850497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:18.901 [2024-11-26 23:04:57.850507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:18.901 [2024-11-26 23:04:57.850516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:18.901 [2024-11-26 23:04:57.850524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:18.901 [2024-11-26 23:04:57.850533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:18.901 [2024-11-26 23:04:57.850541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:18.901 [2024-11-26 23:04:57.850550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:18.901 [2024-11-26 23:04:57.850557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:18.901 [2024-11-26 23:04:57.850566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:18.901 [2024-11-26 23:04:57.850573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.901 [2024-11-26 23:04:57.850581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:20:18.902 [2024-11-26 23:04:57.850589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:18.902 [2024-11-26 23:04:57.850598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.902 [2024-11-26 23:04:57.850612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:18.902 [2024-11-26 23:04:57.850620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:18.902 [2024-11-26 23:04:57.850629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:18.902 [2024-11-26 23:04:57.850644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:18.902 [2024-11-26 23:04:57.850652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:18.902 [2024-11-26 23:04:57.850660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:18.902 [2024-11-26 23:04:57.850668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:18.902 [2024-11-26 23:04:57.850692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:18.902 [2024-11-26 23:04:57.850700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:18.902 [2024-11-26 23:04:57.850708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:18.902 [2024-11-26 23:04:57.850715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:18.902 [2024-11-26 23:04:57.850723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:18.902 [2024-11-26 23:04:57.850730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:18.902 [2024-11-26 23:04:57.850737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:18.902 [2024-11-26 23:04:57.850745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:18.902 [2024-11-26 23:04:57.850752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:18.902 [2024-11-26 23:04:57.850759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:18.902 [2024-11-26 23:04:57.850766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:18.902 [2024-11-26 23:04:57.850773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:18.902 [2024-11-26 23:04:57.850782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:18.902 [2024-11-26 23:04:57.850790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.902 [2024-11-26 23:04:57.850798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:18.902 [2024-11-26 23:04:57.850805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:18.902 [2024-11-26 23:04:57.850813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.902 [2024-11-26 23:04:57.850819] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:18.902 [2024-11-26 23:04:57.850828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:18.902 [2024-11-26 23:04:57.850836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:18.902 [2024-11-26 23:04:57.850844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.902 [2024-11-26 23:04:57.850853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:18.902 [2024-11-26 23:04:57.850860] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:18.902 [2024-11-26 23:04:57.850867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:18.902 [2024-11-26 23:04:57.850874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:18.902 [2024-11-26 23:04:57.850883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:18.902 [2024-11-26 23:04:57.850891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:18.902 [2024-11-26 23:04:57.850900] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:18.902 [2024-11-26 23:04:57.850913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:18.902 [2024-11-26 23:04:57.850924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:18.902 [2024-11-26 23:04:57.850932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:18.902 [2024-11-26 23:04:57.850940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:18.902 [2024-11-26 23:04:57.850947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:18.902 [2024-11-26 23:04:57.850954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:18.902 [2024-11-26 23:04:57.850962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:18.902 [2024-11-26 23:04:57.850969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:18.902 [2024-11-26 23:04:57.850977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:18.902 [2024-11-26 23:04:57.850984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:18.902 [2024-11-26 23:04:57.850992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:18.902 [2024-11-26 23:04:57.850999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:18.902 [2024-11-26 23:04:57.851007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:18.902 [2024-11-26 23:04:57.851015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:18.902 [2024-11-26 23:04:57.851023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:18.902 [2024-11-26 23:04:57.851030] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:18.902 [2024-11-26 23:04:57.851040] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:18.902 [2024-11-26 23:04:57.851050] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:18.902 [2024-11-26 23:04:57.851059] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:18.902 [2024-11-26 23:04:57.851066] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:18.902 [2024-11-26 23:04:57.851074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:18.902 [2024-11-26 23:04:57.851082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.902 [2024-11-26 23:04:57.851092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:18.902 [2024-11-26 23:04:57.851100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.727 ms 00:20:18.902 [2024-11-26 23:04:57.851107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.902 [2024-11-26 23:04:57.871877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.902 [2024-11-26 23:04:57.872067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:18.902 [2024-11-26 23:04:57.872289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.698 ms 00:20:18.902 [2024-11-26 23:04:57.873114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.902 [2024-11-26 23:04:57.873424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.902 [2024-11-26 23:04:57.873472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:18.902 [2024-11-26 23:04:57.873687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:20:18.902 [2024-11-26 23:04:57.873731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.902 [2024-11-26 23:04:57.900030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.902 [2024-11-26 23:04:57.900266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:18.902 [2024-11-26 23:04:57.900409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.242 ms 00:20:18.902 [2024-11-26 23:04:57.900463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.902 [2024-11-26 23:04:57.900612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.902 [2024-11-26 23:04:57.900657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:18.902 [2024-11-26 23:04:57.900687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:18.902 [2024-11-26 23:04:57.900715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.902 [2024-11-26 23:04:57.901515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.902 [2024-11-26 23:04:57.901685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:18.902 [2024-11-26 23:04:57.901761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.750 ms 00:20:18.902 [2024-11-26 23:04:57.901796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.902 [2024-11-26 23:04:57.902038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:18.902 [2024-11-26 23:04:57.902073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:18.902 [2024-11-26 23:04:57.902101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:20:18.902 [2024-11-26 23:04:57.902129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.902 [2024-11-26 23:04:57.914378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.902 [2024-11-26 23:04:57.914539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:18.902 [2024-11-26 23:04:57.914613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.202 ms 00:20:18.902 [2024-11-26 23:04:57.914640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.902 [2024-11-26 23:04:57.919676] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:18.902 [2024-11-26 23:04:57.919852] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:18.902 [2024-11-26 23:04:57.919924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.902 [2024-11-26 23:04:57.919948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:18.902 [2024-11-26 23:04:57.919969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.107 ms 00:20:18.902 [2024-11-26 23:04:57.919988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.902 [2024-11-26 23:04:57.936352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.902 [2024-11-26 23:04:57.936518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:18.902 [2024-11-26 23:04:57.936579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.277 ms 00:20:18.902 [2024-11-26 23:04:57.936603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.903 [2024-11-26 23:04:57.939781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.903 [2024-11-26 23:04:57.939942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:18.903 [2024-11-26 23:04:57.940000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.986 ms 00:20:18.903 [2024-11-26 23:04:57.940023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.903 [2024-11-26 23:04:57.942847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.903 [2024-11-26 23:04:57.943003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:18.903 [2024-11-26 23:04:57.943060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.687 ms 00:20:18.903 [2024-11-26 23:04:57.943083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.903 [2024-11-26 23:04:57.944244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.903 [2024-11-26 23:04:57.944454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:18.903 [2024-11-26 23:04:57.944542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:20:18.903 [2024-11-26 23:04:57.944571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.903 [2024-11-26 23:04:57.974816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.903 [2024-11-26 23:04:57.975106] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:18.903 [2024-11-26 23:04:57.975129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.195 ms 00:20:18.903 [2024-11-26 23:04:57.975147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.903 [2024-11-26 23:04:57.984240] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:18.903 [2024-11-26 23:04:58.009657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.903 [2024-11-26 23:04:58.009784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:18.903 [2024-11-26 23:04:58.009815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.411 ms 00:20:18.903 [2024-11-26 23:04:58.009827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.903 [2024-11-26 23:04:58.009953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.903 [2024-11-26 23:04:58.009970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:18.903 [2024-11-26 23:04:58.009980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:18.903 [2024-11-26 23:04:58.009990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.903 [2024-11-26 23:04:58.010071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.903 [2024-11-26 23:04:58.010082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:18.903 [2024-11-26 23:04:58.010091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:20:18.903 [2024-11-26 23:04:58.010101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.903 [2024-11-26 23:04:58.010139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.903 [2024-11-26 23:04:58.010153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:18.903 [2024-11-26 23:04:58.010166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:18.903 [2024-11-26 23:04:58.010179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.903 [2024-11-26 23:04:58.010223] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:18.903 [2024-11-26 23:04:58.010235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.903 [2024-11-26 23:04:58.010245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:18.903 [2024-11-26 23:04:58.010254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:18.903 [2024-11-26 23:04:58.010262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.903 [2024-11-26 23:04:58.017364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.903 [2024-11-26 23:04:58.017414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:18.903 [2024-11-26 23:04:58.017427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.075 ms 00:20:18.903 [2024-11-26 23:04:58.017450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.903 [2024-11-26 23:04:58.017595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.903 [2024-11-26 23:04:58.017609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:18.903 [2024-11-26 23:04:58.017618] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:20:18.903 [2024-11-26 23:04:58.017627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.903 [2024-11-26 23:04:58.018998] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:18.903 [2024-11-26 23:04:58.020480] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 194.235 ms, result 0 00:20:18.903 [2024-11-26 23:04:58.022474] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:19.165 [2024-11-26 23:04:58.029417] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:20.109  [2024-11-26T23:05:00.185Z] Copying: 16/256 [MB] (16 MBps) [2024-11-26T23:05:01.125Z] Copying: 31/256 [MB] (15 MBps) [2024-11-26T23:05:02.511Z] Copying: 49/256 [MB] (17 MBps) [2024-11-26T23:05:03.085Z] Copying: 59/256 [MB] (10 MBps) [2024-11-26T23:05:04.467Z] Copying: 71/256 [MB] (12 MBps) [2024-11-26T23:05:05.415Z] Copying: 83/256 [MB] (12 MBps) [2024-11-26T23:05:06.354Z] Copying: 100/256 [MB] (16 MBps) [2024-11-26T23:05:07.293Z] Copying: 116/256 [MB] (16 MBps) [2024-11-26T23:05:08.242Z] Copying: 128/256 [MB] (12 MBps) [2024-11-26T23:05:09.186Z] Copying: 139/256 [MB] (10 MBps) [2024-11-26T23:05:10.124Z] Copying: 154/256 [MB] (14 MBps) [2024-11-26T23:05:11.510Z] Copying: 173/256 [MB] (19 MBps) [2024-11-26T23:05:12.446Z] Copying: 188/256 [MB] (14 MBps) [2024-11-26T23:05:13.392Z] Copying: 208/256 [MB] (19 MBps) [2024-11-26T23:05:14.350Z] Copying: 220/256 [MB] (12 MBps) [2024-11-26T23:05:15.339Z] Copying: 230/256 [MB] (10 MBps) [2024-11-26T23:05:16.284Z] Copying: 242/256 [MB] (11 MBps) [2024-11-26T23:05:16.544Z] Copying: 252/256 [MB] (10 MBps) [2024-11-26T23:05:17.118Z] Copying: 256/256 [MB] (average 13 MBps)[2024-11-26 23:05:16.823827] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:37.991 [2024-11-26 23:05:16.826517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.991 [2024-11-26 23:05:16.826576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:37.991 [2024-11-26 23:05:16.826594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:37.991 [2024-11-26 23:05:16.826604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.991 [2024-11-26 23:05:16.826633] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:37.991 [2024-11-26 23:05:16.827647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.991 [2024-11-26 23:05:16.827686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:37.991 [2024-11-26 23:05:16.827700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.997 ms 00:20:37.991 [2024-11-26 23:05:16.827721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.991 [2024-11-26 23:05:16.828076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.991 [2024-11-26 23:05:16.828350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:37.991 [2024-11-26 23:05:16.828369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:20:37.991 [2024-11-26 23:05:16.828379] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:37.991 [2024-11-26 23:05:16.832136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.991 [2024-11-26 23:05:16.832341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:37.991 [2024-11-26 23:05:16.832364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.735 ms 00:20:37.991 [2024-11-26 23:05:16.832375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.991 [2024-11-26 23:05:16.840471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.991 [2024-11-26 23:05:16.840520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:37.991 [2024-11-26 23:05:16.840543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.060 ms 00:20:37.991 [2024-11-26 23:05:16.840561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.991 [2024-11-26 23:05:16.843239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.991 [2024-11-26 23:05:16.843317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:37.991 [2024-11-26 23:05:16.843329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.585 ms 00:20:37.991 [2024-11-26 23:05:16.843337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.991 [2024-11-26 23:05:16.849058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.991 [2024-11-26 23:05:16.849116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:37.991 [2024-11-26 23:05:16.849128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.667 ms 00:20:37.991 [2024-11-26 23:05:16.849138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.991 [2024-11-26 23:05:16.849289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.991 [2024-11-26 23:05:16.849316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:37.991 [2024-11-26 23:05:16.849334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:20:37.991 [2024-11-26 23:05:16.849342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.991 [2024-11-26 23:05:16.852692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.991 [2024-11-26 23:05:16.852747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:37.991 [2024-11-26 23:05:16.852758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.329 ms 00:20:37.991 [2024-11-26 23:05:16.852766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.991 [2024-11-26 23:05:16.856482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.991 [2024-11-26 23:05:16.856552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:37.991 [2024-11-26 23:05:16.856565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.664 ms 00:20:37.991 [2024-11-26 23:05:16.856574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.991 [2024-11-26 23:05:16.858920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.991 [2024-11-26 23:05:16.858974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:37.992 [2024-11-26 23:05:16.858986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.288 ms 00:20:37.992 
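Each management step in the trace above is logged by trace_step() in mngt/ftl_mngt.c as a four-message group: the step kind (Action or Rollback), its name, its duration in milliseconds, and its status. A small parser can aggregate those groups to rank the slowest steps. The sketch below is illustrative only — it assumes one message per line, as the runner normally emits them, and the log file name is hypothetical:

    import re
    from collections import defaultdict

    NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+?)\s*$")
    DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([\d.]+) ms")

    def step_durations(lines):
        """Pair each 'name:' trace message with the 'duration:' message
        that follows it, summing durations per step name."""
        totals = defaultdict(float)
        pending = None
        for line in lines:
            m = NAME_RE.search(line)
            if m:
                pending = m.group(1)
                continue
            m = DUR_RE.search(line)
            if m and pending is not None:
                totals[pending] += float(m.group(1))
                pending = None
        return totals

    # Usage (the path is an assumption, not part of the test):
    # with open("ftl_trim.log") as f:
    #     for name, ms in sorted(step_durations(f).items(), key=lambda kv: -kv[1]):
    #         print(f"{ms:9.3f} ms  {name}")

Run over the startup trace above, this would surface Initialize L2P (34.411 ms) and Restore P2L checkpoints (30.195 ms) as the dominant contributors to the 194.235 ms 'FTL startup' total.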
[2024-11-26 23:05:16.858994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.992 [2024-11-26 23:05:16.861317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.992 [2024-11-26 23:05:16.861367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:37.992 [2024-11-26 23:05:16.861378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.232 ms 00:20:37.992 [2024-11-26 23:05:16.861386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.992 [2024-11-26 23:05:16.861435] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:37.992 [2024-11-26 23:05:16.861455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861620] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 
[2024-11-26 23:05:16.861923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.861996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.862005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.862014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.862023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.862032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.862040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.862049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.862057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.862065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.862072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.862080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.862091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.862100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.862107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.862115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.862124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.862132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.862139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 
state: free 00:20:37.992 [2024-11-26 23:05:16.862147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.863048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.863073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.863081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.863090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.863099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.863109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.863118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.863126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.863135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:37.992 [2024-11-26 23:05:16.863143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:37.993 [2024-11-26 23:05:16.863152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:37.993 [2024-11-26 23:05:16.863160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:37.993 [2024-11-26 23:05:16.863169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:37.993 [2024-11-26 23:05:16.863177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:37.993 [2024-11-26 23:05:16.863184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:37.993 [2024-11-26 23:05:16.863194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:37.993 [2024-11-26 23:05:16.863202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:37.993 [2024-11-26 23:05:16.863210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:37.993 [2024-11-26 23:05:16.863218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:37.993 [2024-11-26 23:05:16.863226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:37.993 [2024-11-26 23:05:16.863233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:37.993 [2024-11-26 23:05:16.863242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:37.993 [2024-11-26 23:05:16.863250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:37.993 [2024-11-26 23:05:16.863258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 
0 / 261120 wr_cnt: 0 state: free 00:20:37.993 [2024-11-26 23:05:16.863267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:37.993 [2024-11-26 23:05:16.863275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:37.993 [2024-11-26 23:05:16.863283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:37.993 [2024-11-26 23:05:16.863312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:37.993 [2024-11-26 23:05:16.863334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:37.993 [2024-11-26 23:05:16.863354] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:37.993 [2024-11-26 23:05:16.863365] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e0e83087-eb0e-4185-b4d1-3f0b13ab70d0 00:20:37.993 [2024-11-26 23:05:16.863375] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:37.993 [2024-11-26 23:05:16.863384] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:37.993 [2024-11-26 23:05:16.863394] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:37.993 [2024-11-26 23:05:16.863403] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:37.993 [2024-11-26 23:05:16.863411] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:37.993 [2024-11-26 23:05:16.863424] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:37.993 [2024-11-26 23:05:16.863433] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:37.993 [2024-11-26 23:05:16.863440] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:37.993 [2024-11-26 23:05:16.863449] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:37.993 [2024-11-26 23:05:16.863459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.993 [2024-11-26 23:05:16.863469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:37.993 [2024-11-26 23:05:16.863479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.025 ms 00:20:37.993 [2024-11-26 23:05:16.863488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.993 [2024-11-26 23:05:16.866838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.993 [2024-11-26 23:05:16.867042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:37.993 [2024-11-26 23:05:16.867075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.321 ms 00:20:37.993 [2024-11-26 23:05:16.867088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.993 [2024-11-26 23:05:16.867251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.993 [2024-11-26 23:05:16.867262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:37.993 [2024-11-26 23:05:16.867272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:20:37.993 [2024-11-26 23:05:16.867280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.993 [2024-11-26 23:05:16.878313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.993 [2024-11-26 23:05:16.878368] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:37.993 [2024-11-26 23:05:16.878385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.993 [2024-11-26 23:05:16.878398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.993 [2024-11-26 23:05:16.878498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.993 [2024-11-26 23:05:16.878510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:37.993 [2024-11-26 23:05:16.878519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.993 [2024-11-26 23:05:16.878528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.993 [2024-11-26 23:05:16.878597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.993 [2024-11-26 23:05:16.878608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:37.993 [2024-11-26 23:05:16.878618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.993 [2024-11-26 23:05:16.878630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.993 [2024-11-26 23:05:16.878653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.993 [2024-11-26 23:05:16.878663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:37.993 [2024-11-26 23:05:16.878702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.993 [2024-11-26 23:05:16.878711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.993 [2024-11-26 23:05:16.898785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.993 [2024-11-26 23:05:16.898852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:37.993 [2024-11-26 23:05:16.898874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.993 [2024-11-26 23:05:16.898891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.993 [2024-11-26 23:05:16.915124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.993 [2024-11-26 23:05:16.915188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:37.993 [2024-11-26 23:05:16.915201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.993 [2024-11-26 23:05:16.915222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.993 [2024-11-26 23:05:16.915289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.993 [2024-11-26 23:05:16.915355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:37.993 [2024-11-26 23:05:16.915366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.993 [2024-11-26 23:05:16.915375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.993 [2024-11-26 23:05:16.915418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.993 [2024-11-26 23:05:16.915428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:37.993 [2024-11-26 23:05:16.915438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.993 [2024-11-26 23:05:16.915447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.993 [2024-11-26 23:05:16.915541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
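During the 'FTL shutdown' process, steps completed at startup are unwound as Rollback entries, and the trace shows them in the reverse of their startup order (reloc, bands metadata, trim map, valid map, NV cache, metadata, and so on back to the base bdev), each logged with a 0.000 ms duration in this clean-shutdown run. Building on the parser sketched earlier, a consistency check might look like this — a sketch under the same one-message-per-line assumption, run over a full device-lifetime trace; the helper name is mine, not SPDK's:

    def rollback_mirrors_startup(steps):
        """steps: ordered (kind, name) tuples parsed from the trace.
        True if the Rollback names occur in the reverse order of the
        startup Actions that bear the same name."""
        actions = [n for k, n in steps if k == "Action"]
        rollbacks = [n for k, n in steps if k == "Rollback"]
        expected = [n for n in reversed(actions) if n in set(rollbacks)]
        return expected == rollbacks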
00:20:37.993 [2024-11-26 23:05:16.915553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:37.993 [2024-11-26 23:05:16.915563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.993 [2024-11-26 23:05:16.915572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.993 [2024-11-26 23:05:16.915610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.993 [2024-11-26 23:05:16.915624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:37.993 [2024-11-26 23:05:16.915639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.993 [2024-11-26 23:05:16.915648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.993 [2024-11-26 23:05:16.915707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.993 [2024-11-26 23:05:16.915717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:37.993 [2024-11-26 23:05:16.915727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.993 [2024-11-26 23:05:16.915736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.993 [2024-11-26 23:05:16.915803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.993 [2024-11-26 23:05:16.915815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:37.993 [2024-11-26 23:05:16.915825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.993 [2024-11-26 23:05:16.915833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.994 [2024-11-26 23:05:16.916046] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 89.491 ms, result 0 00:20:38.255 00:20:38.255 00:20:38.255 23:05:17 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:38.839 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:20:38.839 23:05:17 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:38.839 23:05:17 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:20:38.839 23:05:17 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:38.839 23:05:17 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:38.839 23:05:17 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:20:38.839 23:05:17 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:38.839 23:05:17 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 89848 00:20:38.839 23:05:17 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 89848 ']' 00:20:38.839 23:05:17 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 89848 00:20:38.839 Process with pid 89848 is not found 00:20:38.839 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89848) - No such process 00:20:38.839 23:05:17 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 89848 is not found' 00:20:38.839 00:20:38.839 real 1m21.610s 00:20:38.839 user 1m44.751s 00:20:38.839 sys 0m6.480s 00:20:38.839 23:05:17 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:38.839 23:05:17 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:38.839 ************************************ 00:20:38.839 
END TEST ftl_trim 00:20:38.839 ************************************ 00:20:38.839 23:05:17 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:38.839 23:05:17 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:38.839 23:05:17 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:38.839 23:05:17 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:39.101 ************************************ 00:20:39.101 START TEST ftl_restore 00:20:39.101 ************************************ 00:20:39.101 23:05:17 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:39.101 * Looking for test storage... 00:20:39.101 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:39.101 23:05:18 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:39.101 23:05:18 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:20:39.101 23:05:18 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:39.101 23:05:18 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:39.101 23:05:18 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:39.101 23:05:18 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:39.101 23:05:18 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:39.101 23:05:18 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:20:39.101 23:05:18 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:20:39.101 23:05:18 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:20:39.101 23:05:18 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:20:39.101 23:05:18 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:20:39.101 23:05:18 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:20:39.101 23:05:18 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:20:39.101 23:05:18 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:39.101 23:05:18 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:20:39.101 23:05:18 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:20:39.101 23:05:18 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:39.101 23:05:18 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:39.101 23:05:18 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:20:39.101 23:05:18 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:20:39.101 23:05:18 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:39.101 23:05:18 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:20:39.101 23:05:18 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:20:39.101 23:05:18 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:20:39.101 23:05:18 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:20:39.101 23:05:18 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:39.101 23:05:18 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:20:39.101 23:05:18 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:20:39.101 23:05:18 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:39.101 23:05:18 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:39.101 23:05:18 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:20:39.101 23:05:18 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:39.101 23:05:18 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:39.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.101 --rc genhtml_branch_coverage=1 00:20:39.102 --rc genhtml_function_coverage=1 00:20:39.102 --rc genhtml_legend=1 00:20:39.102 --rc geninfo_all_blocks=1 00:20:39.102 --rc geninfo_unexecuted_blocks=1 00:20:39.102 00:20:39.102 ' 00:20:39.102 23:05:18 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:39.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.102 --rc genhtml_branch_coverage=1 00:20:39.102 --rc genhtml_function_coverage=1 00:20:39.102 --rc genhtml_legend=1 00:20:39.102 --rc geninfo_all_blocks=1 00:20:39.102 --rc geninfo_unexecuted_blocks=1 00:20:39.102 00:20:39.102 ' 00:20:39.102 23:05:18 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:39.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.102 --rc genhtml_branch_coverage=1 00:20:39.102 --rc genhtml_function_coverage=1 00:20:39.102 --rc genhtml_legend=1 00:20:39.102 --rc geninfo_all_blocks=1 00:20:39.102 --rc geninfo_unexecuted_blocks=1 00:20:39.102 00:20:39.102 ' 00:20:39.102 23:05:18 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:39.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.102 --rc genhtml_branch_coverage=1 00:20:39.102 --rc genhtml_function_coverage=1 00:20:39.102 --rc genhtml_legend=1 00:20:39.102 --rc geninfo_all_blocks=1 00:20:39.102 --rc geninfo_unexecuted_blocks=1 00:20:39.102 00:20:39.102 ' 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
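The xtrace above shows cmp_versions in scripts/common.sh deciding that lcov 1.15 is older than 2: both version strings are split on '.', '-' and ':' into components, which are compared left to right as integers, with missing components treated as 0. A rough Python equivalent of that comparison (the function name is mine, not SPDK's):

    import re

    def version_lt(a: str, b: str) -> bool:
        """Component-wise numeric comparison in the style of cmp_versions
        from scripts/common.sh: split on '.', '-' or ':', compare the
        pieces left to right, padding the shorter version with zeros."""
        pa = [int(x) for x in re.split(r"[.:-]", a) if x.isdigit()]
        pb = [int(x) for x in re.split(r"[.:-]", b) if x.isdigit()]
        width = max(len(pa), len(pb))
        pa += [0] * (width - len(pa))
        pb += [0] * (width - len(pb))
        for x, y in zip(pa, pb):
            if x != y:
                return x < y
        return False

    assert version_lt("1.15", "2")        # the case traced above
    assert not version_lt("2.39.2", "2")  # 2.39.2 is not older than 2

Here the lt result selects the pre-2.x lcov flags, which is why LCOV_OPTS gains --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 in the trace above.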
00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.RdMI8TXHIQ 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:39.102 
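restore.sh then launches an spdk_tgt instance and blocks in waitforlisten until the target's RPC socket accepts connections (the 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' message below). The shape of that wait, reduced to its socket-polling core — a minimal sketch, not the actual autotest_common.sh logic, which also tracks the target's pid; the 240 s default mirrors the timeout set above:

    import socket
    import time

    def wait_for_listen(sock_path="/var/tmp/spdk.sock", timeout=240.0):
        """Poll a UNIX-domain socket until the SPDK target accepts a
        connection, or raise once the timeout expires."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(sock_path)
                return
            except OSError:
                time.sleep(0.5)
            finally:
                s.close()
        raise TimeoutError(f"nothing listening on {sock_path} after {timeout}s")

Once the socket is up, every subsequent step in the test drives the target through scripts/rpc.py against that same socket.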
23:05:18 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=90167 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 90167 00:20:39.102 23:05:18 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 90167 ']' 00:20:39.102 23:05:18 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.102 23:05:18 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:39.102 23:05:18 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:39.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.102 23:05:18 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.102 23:05:18 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:39.102 23:05:18 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:39.364 [2024-11-26 23:05:18.252916] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:20:39.364 [2024-11-26 23:05:18.253373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90167 ] 00:20:39.364 [2024-11-26 23:05:18.394589] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:39.364 [2024-11-26 23:05:18.425464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.364 [2024-11-26 23:05:18.468022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.306 23:05:19 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:40.306 23:05:19 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:20:40.306 23:05:19 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:40.306 23:05:19 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:40.306 23:05:19 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:40.306 23:05:19 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:40.306 23:05:19 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:40.306 23:05:19 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:40.306 23:05:19 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:40.306 23:05:19 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:40.306 23:05:19 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:40.306 23:05:19 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:40.306 23:05:19 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:40.306 23:05:19 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:40.306 23:05:19 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:40.306 23:05:19 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:40.579 23:05:19 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:40.579 { 00:20:40.579 "name": "nvme0n1", 00:20:40.579 "aliases": [ 00:20:40.579 
"08a3b6ae-2d34-4f30-ac86-2e9a3ea7b5dc" 00:20:40.579 ], 00:20:40.579 "product_name": "NVMe disk", 00:20:40.579 "block_size": 4096, 00:20:40.579 "num_blocks": 1310720, 00:20:40.579 "uuid": "08a3b6ae-2d34-4f30-ac86-2e9a3ea7b5dc", 00:20:40.579 "numa_id": -1, 00:20:40.579 "assigned_rate_limits": { 00:20:40.579 "rw_ios_per_sec": 0, 00:20:40.579 "rw_mbytes_per_sec": 0, 00:20:40.579 "r_mbytes_per_sec": 0, 00:20:40.579 "w_mbytes_per_sec": 0 00:20:40.579 }, 00:20:40.579 "claimed": true, 00:20:40.579 "claim_type": "read_many_write_one", 00:20:40.579 "zoned": false, 00:20:40.579 "supported_io_types": { 00:20:40.579 "read": true, 00:20:40.579 "write": true, 00:20:40.579 "unmap": true, 00:20:40.579 "flush": true, 00:20:40.579 "reset": true, 00:20:40.579 "nvme_admin": true, 00:20:40.579 "nvme_io": true, 00:20:40.579 "nvme_io_md": false, 00:20:40.579 "write_zeroes": true, 00:20:40.579 "zcopy": false, 00:20:40.579 "get_zone_info": false, 00:20:40.579 "zone_management": false, 00:20:40.579 "zone_append": false, 00:20:40.579 "compare": true, 00:20:40.579 "compare_and_write": false, 00:20:40.579 "abort": true, 00:20:40.579 "seek_hole": false, 00:20:40.579 "seek_data": false, 00:20:40.579 "copy": true, 00:20:40.579 "nvme_iov_md": false 00:20:40.579 }, 00:20:40.579 "driver_specific": { 00:20:40.579 "nvme": [ 00:20:40.579 { 00:20:40.579 "pci_address": "0000:00:11.0", 00:20:40.579 "trid": { 00:20:40.579 "trtype": "PCIe", 00:20:40.579 "traddr": "0000:00:11.0" 00:20:40.579 }, 00:20:40.579 "ctrlr_data": { 00:20:40.579 "cntlid": 0, 00:20:40.579 "vendor_id": "0x1b36", 00:20:40.579 "model_number": "QEMU NVMe Ctrl", 00:20:40.579 "serial_number": "12341", 00:20:40.579 "firmware_revision": "8.0.0", 00:20:40.579 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:40.579 "oacs": { 00:20:40.579 "security": 0, 00:20:40.579 "format": 1, 00:20:40.579 "firmware": 0, 00:20:40.579 "ns_manage": 1 00:20:40.579 }, 00:20:40.580 "multi_ctrlr": false, 00:20:40.580 "ana_reporting": false 00:20:40.580 }, 00:20:40.580 "vs": { 00:20:40.580 "nvme_version": "1.4" 00:20:40.580 }, 00:20:40.580 "ns_data": { 00:20:40.580 "id": 1, 00:20:40.580 "can_share": false 00:20:40.580 } 00:20:40.580 } 00:20:40.580 ], 00:20:40.580 "mp_policy": "active_passive" 00:20:40.580 } 00:20:40.580 } 00:20:40.580 ]' 00:20:40.580 23:05:19 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:40.580 23:05:19 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:40.580 23:05:19 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:40.580 23:05:19 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:40.580 23:05:19 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:40.580 23:05:19 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:20:40.580 23:05:19 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:40.580 23:05:19 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:40.580 23:05:19 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:40.580 23:05:19 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:40.580 23:05:19 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:40.845 23:05:19 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=b40ef967-a999-4c9c-9fd3-ba4bec159d18 00:20:40.845 23:05:19 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:40.845 23:05:19 ftl.ftl_restore -- ftl/common.sh@30 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b40ef967-a999-4c9c-9fd3-ba4bec159d18 00:20:41.106 23:05:20 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:41.369 23:05:20 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=93afc4aa-3bf9-46e4-83ab-670e9550127e 00:20:41.369 23:05:20 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 93afc4aa-3bf9-46e4-83ab-670e9550127e 00:20:41.632 23:05:20 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=584ef9db-4977-4352-9e91-f817346aeae5 00:20:41.632 23:05:20 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:41.632 23:05:20 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 584ef9db-4977-4352-9e91-f817346aeae5 00:20:41.632 23:05:20 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:41.632 23:05:20 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:41.632 23:05:20 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=584ef9db-4977-4352-9e91-f817346aeae5 00:20:41.632 23:05:20 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:41.632 23:05:20 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 584ef9db-4977-4352-9e91-f817346aeae5 00:20:41.632 23:05:20 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=584ef9db-4977-4352-9e91-f817346aeae5 00:20:41.632 23:05:20 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:41.632 23:05:20 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:41.632 23:05:20 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:41.632 23:05:20 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 584ef9db-4977-4352-9e91-f817346aeae5 00:20:41.907 23:05:20 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:41.907 { 00:20:41.907 "name": "584ef9db-4977-4352-9e91-f817346aeae5", 00:20:41.907 "aliases": [ 00:20:41.907 "lvs/nvme0n1p0" 00:20:41.907 ], 00:20:41.907 "product_name": "Logical Volume", 00:20:41.907 "block_size": 4096, 00:20:41.907 "num_blocks": 26476544, 00:20:41.907 "uuid": "584ef9db-4977-4352-9e91-f817346aeae5", 00:20:41.907 "assigned_rate_limits": { 00:20:41.907 "rw_ios_per_sec": 0, 00:20:41.907 "rw_mbytes_per_sec": 0, 00:20:41.907 "r_mbytes_per_sec": 0, 00:20:41.907 "w_mbytes_per_sec": 0 00:20:41.907 }, 00:20:41.907 "claimed": false, 00:20:41.907 "zoned": false, 00:20:41.907 "supported_io_types": { 00:20:41.907 "read": true, 00:20:41.907 "write": true, 00:20:41.907 "unmap": true, 00:20:41.907 "flush": false, 00:20:41.907 "reset": true, 00:20:41.907 "nvme_admin": false, 00:20:41.907 "nvme_io": false, 00:20:41.907 "nvme_io_md": false, 00:20:41.907 "write_zeroes": true, 00:20:41.907 "zcopy": false, 00:20:41.907 "get_zone_info": false, 00:20:41.907 "zone_management": false, 00:20:41.907 "zone_append": false, 00:20:41.907 "compare": false, 00:20:41.907 "compare_and_write": false, 00:20:41.907 "abort": false, 00:20:41.907 "seek_hole": true, 00:20:41.907 "seek_data": true, 00:20:41.907 "copy": false, 00:20:41.907 "nvme_iov_md": false 00:20:41.907 }, 00:20:41.907 "driver_specific": { 00:20:41.907 "lvol": { 00:20:41.907 "lvol_store_uuid": "93afc4aa-3bf9-46e4-83ab-670e9550127e", 00:20:41.907 "base_bdev": "nvme0n1", 00:20:41.907 "thin_provision": true, 00:20:41.907 "num_allocated_clusters": 0, 
00:20:41.907 "snapshot": false, 00:20:41.907 "clone": false, 00:20:41.907 "esnap_clone": false 00:20:41.907 } 00:20:41.907 } 00:20:41.907 } 00:20:41.907 ]' 00:20:41.907 23:05:20 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:41.907 23:05:20 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:41.907 23:05:20 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:41.908 23:05:20 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:41.908 23:05:20 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:41.908 23:05:20 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:41.908 23:05:20 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:41.908 23:05:20 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:41.908 23:05:20 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:42.174 23:05:21 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:42.174 23:05:21 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:42.174 23:05:21 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 584ef9db-4977-4352-9e91-f817346aeae5 00:20:42.174 23:05:21 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=584ef9db-4977-4352-9e91-f817346aeae5 00:20:42.174 23:05:21 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:42.174 23:05:21 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:42.174 23:05:21 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:42.174 23:05:21 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 584ef9db-4977-4352-9e91-f817346aeae5 00:20:42.432 23:05:21 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:42.432 { 00:20:42.432 "name": "584ef9db-4977-4352-9e91-f817346aeae5", 00:20:42.432 "aliases": [ 00:20:42.432 "lvs/nvme0n1p0" 00:20:42.432 ], 00:20:42.432 "product_name": "Logical Volume", 00:20:42.432 "block_size": 4096, 00:20:42.432 "num_blocks": 26476544, 00:20:42.432 "uuid": "584ef9db-4977-4352-9e91-f817346aeae5", 00:20:42.432 "assigned_rate_limits": { 00:20:42.432 "rw_ios_per_sec": 0, 00:20:42.432 "rw_mbytes_per_sec": 0, 00:20:42.432 "r_mbytes_per_sec": 0, 00:20:42.432 "w_mbytes_per_sec": 0 00:20:42.432 }, 00:20:42.432 "claimed": false, 00:20:42.432 "zoned": false, 00:20:42.432 "supported_io_types": { 00:20:42.432 "read": true, 00:20:42.432 "write": true, 00:20:42.432 "unmap": true, 00:20:42.432 "flush": false, 00:20:42.432 "reset": true, 00:20:42.432 "nvme_admin": false, 00:20:42.432 "nvme_io": false, 00:20:42.432 "nvme_io_md": false, 00:20:42.432 "write_zeroes": true, 00:20:42.432 "zcopy": false, 00:20:42.432 "get_zone_info": false, 00:20:42.432 "zone_management": false, 00:20:42.432 "zone_append": false, 00:20:42.432 "compare": false, 00:20:42.432 "compare_and_write": false, 00:20:42.432 "abort": false, 00:20:42.432 "seek_hole": true, 00:20:42.432 "seek_data": true, 00:20:42.432 "copy": false, 00:20:42.432 "nvme_iov_md": false 00:20:42.432 }, 00:20:42.432 "driver_specific": { 00:20:42.432 "lvol": { 00:20:42.432 "lvol_store_uuid": "93afc4aa-3bf9-46e4-83ab-670e9550127e", 00:20:42.432 "base_bdev": "nvme0n1", 00:20:42.432 "thin_provision": true, 00:20:42.432 "num_allocated_clusters": 0, 00:20:42.432 "snapshot": false, 00:20:42.432 "clone": false, 
00:20:42.432 "esnap_clone": false 00:20:42.432 } 00:20:42.432 } 00:20:42.432 } 00:20:42.432 ]' 00:20:42.432 23:05:21 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:42.432 23:05:21 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:42.432 23:05:21 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:42.432 23:05:21 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:42.432 23:05:21 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:42.432 23:05:21 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:42.432 23:05:21 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:42.432 23:05:21 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:42.690 23:05:21 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:42.690 23:05:21 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 584ef9db-4977-4352-9e91-f817346aeae5 00:20:42.690 23:05:21 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=584ef9db-4977-4352-9e91-f817346aeae5 00:20:42.690 23:05:21 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:42.690 23:05:21 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:42.690 23:05:21 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:42.690 23:05:21 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 584ef9db-4977-4352-9e91-f817346aeae5 00:20:42.949 23:05:21 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:42.949 { 00:20:42.949 "name": "584ef9db-4977-4352-9e91-f817346aeae5", 00:20:42.949 "aliases": [ 00:20:42.949 "lvs/nvme0n1p0" 00:20:42.949 ], 00:20:42.949 "product_name": "Logical Volume", 00:20:42.949 "block_size": 4096, 00:20:42.949 "num_blocks": 26476544, 00:20:42.949 "uuid": "584ef9db-4977-4352-9e91-f817346aeae5", 00:20:42.949 "assigned_rate_limits": { 00:20:42.949 "rw_ios_per_sec": 0, 00:20:42.949 "rw_mbytes_per_sec": 0, 00:20:42.949 "r_mbytes_per_sec": 0, 00:20:42.949 "w_mbytes_per_sec": 0 00:20:42.949 }, 00:20:42.949 "claimed": false, 00:20:42.949 "zoned": false, 00:20:42.949 "supported_io_types": { 00:20:42.949 "read": true, 00:20:42.949 "write": true, 00:20:42.949 "unmap": true, 00:20:42.949 "flush": false, 00:20:42.949 "reset": true, 00:20:42.949 "nvme_admin": false, 00:20:42.949 "nvme_io": false, 00:20:42.949 "nvme_io_md": false, 00:20:42.949 "write_zeroes": true, 00:20:42.949 "zcopy": false, 00:20:42.949 "get_zone_info": false, 00:20:42.949 "zone_management": false, 00:20:42.949 "zone_append": false, 00:20:42.950 "compare": false, 00:20:42.950 "compare_and_write": false, 00:20:42.950 "abort": false, 00:20:42.950 "seek_hole": true, 00:20:42.950 "seek_data": true, 00:20:42.950 "copy": false, 00:20:42.950 "nvme_iov_md": false 00:20:42.950 }, 00:20:42.950 "driver_specific": { 00:20:42.950 "lvol": { 00:20:42.950 "lvol_store_uuid": "93afc4aa-3bf9-46e4-83ab-670e9550127e", 00:20:42.950 "base_bdev": "nvme0n1", 00:20:42.950 "thin_provision": true, 00:20:42.950 "num_allocated_clusters": 0, 00:20:42.950 "snapshot": false, 00:20:42.950 "clone": false, 00:20:42.950 "esnap_clone": false 00:20:42.950 } 00:20:42.950 } 00:20:42.950 } 00:20:42.950 ]' 00:20:42.950 23:05:21 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:42.950 23:05:21 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # bs=4096 00:20:42.950 23:05:21 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:42.950 23:05:21 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:42.950 23:05:21 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:42.950 23:05:21 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:42.950 23:05:21 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:42.950 23:05:21 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 584ef9db-4977-4352-9e91-f817346aeae5 --l2p_dram_limit 10' 00:20:42.950 23:05:21 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:42.950 23:05:21 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:42.950 23:05:21 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:42.950 23:05:21 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:42.950 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:42.950 23:05:21 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 584ef9db-4977-4352-9e91-f817346aeae5 --l2p_dram_limit 10 -c nvc0n1p0 00:20:43.213 [2024-11-26 23:05:22.076242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.213 [2024-11-26 23:05:22.076386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:43.213 [2024-11-26 23:05:22.076407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:43.213 [2024-11-26 23:05:22.076415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.213 [2024-11-26 23:05:22.076481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.213 [2024-11-26 23:05:22.076495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:43.213 [2024-11-26 23:05:22.076509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:43.213 [2024-11-26 23:05:22.076515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.213 [2024-11-26 23:05:22.076534] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:43.213 [2024-11-26 23:05:22.076780] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:43.213 [2024-11-26 23:05:22.076794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.213 [2024-11-26 23:05:22.076800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:43.213 [2024-11-26 23:05:22.076811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:20:43.213 [2024-11-26 23:05:22.076817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.213 [2024-11-26 23:05:22.076845] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 30f64c7f-41c7-49e6-993d-bc2c7c41b945 00:20:43.213 [2024-11-26 23:05:22.078166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.213 [2024-11-26 23:05:22.078198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:43.213 [2024-11-26 23:05:22.078207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:43.213 [2024-11-26 23:05:22.078215] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.213 [2024-11-26 23:05:22.085276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.213 [2024-11-26 23:05:22.085315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:43.213 [2024-11-26 23:05:22.085324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.997 ms 00:20:43.213 [2024-11-26 23:05:22.085338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.213 [2024-11-26 23:05:22.085414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.213 [2024-11-26 23:05:22.085428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:43.213 [2024-11-26 23:05:22.085434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:20:43.213 [2024-11-26 23:05:22.085444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.213 [2024-11-26 23:05:22.085485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.213 [2024-11-26 23:05:22.085495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:43.213 [2024-11-26 23:05:22.085502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:43.213 [2024-11-26 23:05:22.085509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.213 [2024-11-26 23:05:22.085531] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:43.213 [2024-11-26 23:05:22.087220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.213 [2024-11-26 23:05:22.087247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:43.213 [2024-11-26 23:05:22.087256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.692 ms 00:20:43.213 [2024-11-26 23:05:22.087265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.213 [2024-11-26 23:05:22.087307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.213 [2024-11-26 23:05:22.087314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:43.213 [2024-11-26 23:05:22.087325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:43.213 [2024-11-26 23:05:22.087330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.213 [2024-11-26 23:05:22.087346] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:43.213 [2024-11-26 23:05:22.087476] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:43.213 [2024-11-26 23:05:22.087489] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:43.213 [2024-11-26 23:05:22.087498] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:43.213 [2024-11-26 23:05:22.087516] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:43.213 [2024-11-26 23:05:22.087523] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:43.213 [2024-11-26 23:05:22.087536] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:43.213 [2024-11-26 23:05:22.087541] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 
4 00:20:43.213 [2024-11-26 23:05:22.087552] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:43.213 [2024-11-26 23:05:22.087558] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:43.213 [2024-11-26 23:05:22.087568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.213 [2024-11-26 23:05:22.087574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:43.213 [2024-11-26 23:05:22.087582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:20:43.213 [2024-11-26 23:05:22.087588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.213 [2024-11-26 23:05:22.087658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.213 [2024-11-26 23:05:22.087664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:43.213 [2024-11-26 23:05:22.087673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:43.213 [2024-11-26 23:05:22.087679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.213 [2024-11-26 23:05:22.087753] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:43.213 [2024-11-26 23:05:22.087761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:43.213 [2024-11-26 23:05:22.087768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:43.213 [2024-11-26 23:05:22.087774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:43.213 [2024-11-26 23:05:22.087782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:43.213 [2024-11-26 23:05:22.087787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:43.213 [2024-11-26 23:05:22.087795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:43.213 [2024-11-26 23:05:22.087800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:43.213 [2024-11-26 23:05:22.087806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:43.213 [2024-11-26 23:05:22.087811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:43.213 [2024-11-26 23:05:22.087818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:43.213 [2024-11-26 23:05:22.087823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:43.213 [2024-11-26 23:05:22.087831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:43.213 [2024-11-26 23:05:22.087835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:43.213 [2024-11-26 23:05:22.087842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:43.213 [2024-11-26 23:05:22.087847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:43.213 [2024-11-26 23:05:22.087854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:43.214 [2024-11-26 23:05:22.087859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:43.214 [2024-11-26 23:05:22.087867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:43.214 [2024-11-26 23:05:22.087873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:43.214 [2024-11-26 23:05:22.087880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:43.214 [2024-11-26 23:05:22.087886] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:43.214 [2024-11-26 23:05:22.087894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:43.214 [2024-11-26 23:05:22.087901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:43.214 [2024-11-26 23:05:22.087908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:43.214 [2024-11-26 23:05:22.087914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:43.214 [2024-11-26 23:05:22.087921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:43.214 [2024-11-26 23:05:22.087927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:43.214 [2024-11-26 23:05:22.087936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:43.214 [2024-11-26 23:05:22.087943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:43.214 [2024-11-26 23:05:22.087952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:43.214 [2024-11-26 23:05:22.087958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:43.214 [2024-11-26 23:05:22.087966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:43.214 [2024-11-26 23:05:22.087972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:43.214 [2024-11-26 23:05:22.087980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:43.214 [2024-11-26 23:05:22.087986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:43.214 [2024-11-26 23:05:22.087993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:43.214 [2024-11-26 23:05:22.087999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:43.214 [2024-11-26 23:05:22.088006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:43.214 [2024-11-26 23:05:22.088012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:43.214 [2024-11-26 23:05:22.088020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:43.214 [2024-11-26 23:05:22.088026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:43.214 [2024-11-26 23:05:22.088033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:43.214 [2024-11-26 23:05:22.088039] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:43.214 [2024-11-26 23:05:22.088048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:43.214 [2024-11-26 23:05:22.088055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:43.214 [2024-11-26 23:05:22.088063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:43.214 [2024-11-26 23:05:22.088072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:43.214 [2024-11-26 23:05:22.088079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:43.214 [2024-11-26 23:05:22.088085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:43.214 [2024-11-26 23:05:22.088093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:43.214 [2024-11-26 23:05:22.088099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:43.214 [2024-11-26 23:05:22.088106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 
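The bring-up traced above is restore.sh's standard FTL construction: attach the base NVMe controller, carve a thin-provisioned lvol out of its namespace, attach the cache controller, split off a write-buffer partition, and create the FTL bdev on top. A condensed sketch of the same RPC sequence, reconstructed from the trace (the controller names, PCI addresses, and sizes are the ones this run used; the rpc shorthand variable and the script framing are ours):

    #!/usr/bin/env bash
    set -euo pipefail
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Base device: QEMU NVMe controller at 0000:00:11.0 exposes nvme0n1
    # (1310720 x 4096-byte blocks = 5120 MiB).
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0

    # Drop any lvol stores left over from a previous run (common.sh clear_lvols).
    for u in $($rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
        $rpc bdev_lvol_delete_lvstore -u "$u"
    done

    # Thin-provisioned 103424 MiB lvol on a fresh lvstore; both RPCs print
    # the UUID of the object they create, which the trace captures.
    lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)
    lvol=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")

    # Cache device: controller at 0000:00:10.0, one 5171 MiB split partition
    # (the base namespace's 5120 MiB padded by roughly 1%).
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create nvc0n1 -s 5171 1

    # FTL bdev: lvol as base device, split partition as NV cache, L2P capped
    # at 10 MiB of DRAM; creation can take a while, hence the 240 s timeout.
    $rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol" --l2p_dram_limit 10 -c nvc0n1p0

The layout dump printed at creation is consistent with those numbers: the 80.00 MiB l2p region at the reported address size of 4 bytes per entry holds exactly the 20971520 L2P entries, one per 4 KiB user block, and the --l2p_dram_limit 10 surfaces further down as "l2p maximum resident size is: 9 (of 10) MiB".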
00:20:43.214 [2024-11-26 23:05:22.088115] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:43.214 [2024-11-26 23:05:22.088125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:43.214 [2024-11-26 23:05:22.088136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:43.214 [2024-11-26 23:05:22.088144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:43.214 [2024-11-26 23:05:22.088151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:43.214 [2024-11-26 23:05:22.088158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:43.214 [2024-11-26 23:05:22.088165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:43.214 [2024-11-26 23:05:22.088174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:43.214 [2024-11-26 23:05:22.088181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:43.214 [2024-11-26 23:05:22.088189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:43.214 [2024-11-26 23:05:22.088196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:43.214 [2024-11-26 23:05:22.088205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:43.214 [2024-11-26 23:05:22.088211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:43.214 [2024-11-26 23:05:22.088219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:43.214 [2024-11-26 23:05:22.088226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:43.214 [2024-11-26 23:05:22.088234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:43.214 [2024-11-26 23:05:22.088241] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:43.214 [2024-11-26 23:05:22.088250] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:43.214 [2024-11-26 23:05:22.088257] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:43.214 [2024-11-26 23:05:22.088265] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:43.214 [2024-11-26 23:05:22.088270] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:43.214 [2024-11-26 23:05:22.088277] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:43.214 [2024-11-26 23:05:22.088283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.214 [2024-11-26 23:05:22.088292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:43.214 [2024-11-26 23:05:22.088318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:20:43.214 [2024-11-26 23:05:22.088325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.214 [2024-11-26 23:05:22.088364] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:43.214 [2024-11-26 23:05:22.088374] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:46.513 [2024-11-26 23:05:25.405548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.513 [2024-11-26 23:05:25.405621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:46.513 [2024-11-26 23:05:25.405637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3317.172 ms 00:20:46.513 [2024-11-26 23:05:25.405649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.513 [2024-11-26 23:05:25.417353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.513 [2024-11-26 23:05:25.417526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:46.513 [2024-11-26 23:05:25.417546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.611 ms 00:20:46.513 [2024-11-26 23:05:25.417568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.513 [2024-11-26 23:05:25.417686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.513 [2024-11-26 23:05:25.417699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:46.513 [2024-11-26 23:05:25.417708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:46.513 [2024-11-26 23:05:25.417718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.513 [2024-11-26 23:05:25.429151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.513 [2024-11-26 23:05:25.429281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:46.513 [2024-11-26 23:05:25.429312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.396 ms 00:20:46.513 [2024-11-26 23:05:25.429323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.513 [2024-11-26 23:05:25.429355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.513 [2024-11-26 23:05:25.429366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:46.513 [2024-11-26 23:05:25.429374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:46.513 [2024-11-26 23:05:25.429383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.513 [2024-11-26 23:05:25.429819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.513 [2024-11-26 23:05:25.429840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:46.513 [2024-11-26 23:05:25.429850] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:20:46.513 [2024-11-26 23:05:25.429866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.513 [2024-11-26 23:05:25.429975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.513 [2024-11-26 23:05:25.429987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:46.513 [2024-11-26 23:05:25.429996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:20:46.513 [2024-11-26 23:05:25.430007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.513 [2024-11-26 23:05:25.437166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.513 [2024-11-26 23:05:25.437201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:46.513 [2024-11-26 23:05:25.437211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.140 ms 00:20:46.513 [2024-11-26 23:05:25.437221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.513 [2024-11-26 23:05:25.456439] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:46.513 [2024-11-26 23:05:25.460120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.513 [2024-11-26 23:05:25.460274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:46.513 [2024-11-26 23:05:25.460316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.824 ms 00:20:46.513 [2024-11-26 23:05:25.460328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.513 [2024-11-26 23:05:25.526201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.513 [2024-11-26 23:05:25.526499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:46.513 [2024-11-26 23:05:25.526544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.820 ms 00:20:46.513 [2024-11-26 23:05:25.526562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.513 [2024-11-26 23:05:25.526895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.513 [2024-11-26 23:05:25.526916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:46.513 [2024-11-26 23:05:25.526936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:20:46.513 [2024-11-26 23:05:25.526949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.513 [2024-11-26 23:05:25.530853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.513 [2024-11-26 23:05:25.530892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:46.514 [2024-11-26 23:05:25.530905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.843 ms 00:20:46.514 [2024-11-26 23:05:25.530913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.514 [2024-11-26 23:05:25.534509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.514 [2024-11-26 23:05:25.534544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:46.514 [2024-11-26 23:05:25.534556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.569 ms 00:20:46.514 [2024-11-26 23:05:25.534563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.514 [2024-11-26 23:05:25.534895] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:20:46.514 [2024-11-26 23:05:25.534904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:46.514 [2024-11-26 23:05:25.534917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:20:46.514 [2024-11-26 23:05:25.534925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.514 [2024-11-26 23:05:25.571634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.514 [2024-11-26 23:05:25.571676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:46.514 [2024-11-26 23:05:25.571695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.686 ms 00:20:46.514 [2024-11-26 23:05:25.571703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.514 [2024-11-26 23:05:25.577368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.514 [2024-11-26 23:05:25.577506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:46.514 [2024-11-26 23:05:25.577528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.597 ms 00:20:46.514 [2024-11-26 23:05:25.577537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.514 [2024-11-26 23:05:25.581145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.514 [2024-11-26 23:05:25.581276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:46.514 [2024-11-26 23:05:25.581310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.569 ms 00:20:46.514 [2024-11-26 23:05:25.581318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.514 [2024-11-26 23:05:25.585967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.514 [2024-11-26 23:05:25.586004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:46.514 [2024-11-26 23:05:25.586019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.611 ms 00:20:46.514 [2024-11-26 23:05:25.586026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.514 [2024-11-26 23:05:25.586069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.514 [2024-11-26 23:05:25.586079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:46.514 [2024-11-26 23:05:25.586090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:46.514 [2024-11-26 23:05:25.586098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.514 [2024-11-26 23:05:25.586178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.514 [2024-11-26 23:05:25.586187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:46.514 [2024-11-26 23:05:25.586200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:46.514 [2024-11-26 23:05:25.586207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.514 [2024-11-26 23:05:25.587274] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3510.576 ms, result 0 00:20:46.514 { 00:20:46.514 "name": "ftl0", 00:20:46.514 "uuid": "30f64c7f-41c7-49e6-993d-bc2c7c41b945" 00:20:46.514 } 00:20:46.514 23:05:25 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:46.514 23:05:25 ftl.ftl_restore -- ftl/restore.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:46.775 23:05:25 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:20:46.775 23:05:25 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:47.038 [2024-11-26 23:05:25.996054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.038 [2024-11-26 23:05:25.996105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:47.038 [2024-11-26 23:05:25.996119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:47.038 [2024-11-26 23:05:25.996129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.038 [2024-11-26 23:05:25.996153] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:47.038 [2024-11-26 23:05:25.996775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.038 [2024-11-26 23:05:25.996798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:47.038 [2024-11-26 23:05:25.996810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.600 ms 00:20:47.038 [2024-11-26 23:05:25.996818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.038 [2024-11-26 23:05:25.997076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.038 [2024-11-26 23:05:25.997089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:47.038 [2024-11-26 23:05:25.997103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:20:47.038 [2024-11-26 23:05:25.997112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.038 [2024-11-26 23:05:26.000363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.038 [2024-11-26 23:05:26.000386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:47.038 [2024-11-26 23:05:26.000398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.233 ms 00:20:47.038 [2024-11-26 23:05:26.000405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.038 [2024-11-26 23:05:26.006693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.038 [2024-11-26 23:05:26.006812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:47.038 [2024-11-26 23:05:26.006835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.263 ms 00:20:47.038 [2024-11-26 23:05:26.006843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.038 [2024-11-26 23:05:26.009062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.038 [2024-11-26 23:05:26.009096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:47.038 [2024-11-26 23:05:26.009108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.140 ms 00:20:47.038 [2024-11-26 23:05:26.009115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.038 [2024-11-26 23:05:26.013975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.038 [2024-11-26 23:05:26.014097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:47.038 [2024-11-26 23:05:26.014117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.821 ms 00:20:47.038 [2024-11-26 23:05:26.014126] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:47.038 [2024-11-26 23:05:26.014246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.038 [2024-11-26 23:05:26.014267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:47.038 [2024-11-26 23:05:26.014278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:20:47.038 [2024-11-26 23:05:26.014286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.038 [2024-11-26 23:05:26.016780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.038 [2024-11-26 23:05:26.016815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:47.038 [2024-11-26 23:05:26.016827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.458 ms 00:20:47.038 [2024-11-26 23:05:26.016834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.038 [2024-11-26 23:05:26.018864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.038 [2024-11-26 23:05:26.018897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:47.038 [2024-11-26 23:05:26.018907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.990 ms 00:20:47.038 [2024-11-26 23:05:26.018914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.038 [2024-11-26 23:05:26.020843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.038 [2024-11-26 23:05:26.020875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:47.038 [2024-11-26 23:05:26.020886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.891 ms 00:20:47.038 [2024-11-26 23:05:26.020892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.038 [2024-11-26 23:05:26.022339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.038 [2024-11-26 23:05:26.022369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:47.038 [2024-11-26 23:05:26.022379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.386 ms 00:20:47.038 [2024-11-26 23:05:26.022386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.038 [2024-11-26 23:05:26.022419] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:47.038 [2024-11-26 23:05:26.022433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:47.038 [2024-11-26 23:05:26.022445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:47.038 [2024-11-26 23:05:26.022453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:47.038 [2024-11-26 23:05:26.022468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:47.038 [2024-11-26 23:05:26.022475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:47.038 [2024-11-26 23:05:26.022485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:47.038 [2024-11-26 23:05:26.022493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:47.038 [2024-11-26 23:05:26.022503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 
00:20:47.038 [2024-11-26 23:05:26.022510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:47.038 [2024-11-26 23:05:26.022520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:47.038 [2024-11-26 23:05:26.022527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:47.038 [2024-11-26 23:05:26.022537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:47.038 [2024-11-26 23:05:26.022544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:47.038 [2024-11-26 23:05:26.022554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:47.038 [2024-11-26 23:05:26.022561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:47.038 [2024-11-26 23:05:26.022570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:47.038 [2024-11-26 23:05:26.022577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:47.038 [2024-11-26 23:05:26.022587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:47.038 [2024-11-26 23:05:26.022594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:47.038 [2024-11-26 23:05:26.022606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 
wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.022997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023171] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:47.039 [2024-11-26 23:05:26.023353] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:47.039 [2024-11-26 23:05:26.023363] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 30f64c7f-41c7-49e6-993d-bc2c7c41b945 00:20:47.039 [2024-11-26 23:05:26.023371] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:47.039 [2024-11-26 23:05:26.023380] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:47.039 [2024-11-26 23:05:26.023388] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:47.039 [2024-11-26 23:05:26.023400] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:47.039 [2024-11-26 23:05:26.023408] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:47.039 [2024-11-26 23:05:26.023419] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:47.039 [2024-11-26 23:05:26.023427] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:47.039 
[2024-11-26 23:05:26.023435] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:47.039 [2024-11-26 23:05:26.023441] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:47.039 [2024-11-26 23:05:26.023451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.040 [2024-11-26 23:05:26.023459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:47.040 [2024-11-26 23:05:26.023469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.033 ms 00:20:47.040 [2024-11-26 23:05:26.023480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.040 [2024-11-26 23:05:26.025479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.040 [2024-11-26 23:05:26.025507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:47.040 [2024-11-26 23:05:26.025523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.974 ms 00:20:47.040 [2024-11-26 23:05:26.025532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.040 [2024-11-26 23:05:26.025645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.040 [2024-11-26 23:05:26.025655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:47.040 [2024-11-26 23:05:26.025666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:20:47.040 [2024-11-26 23:05:26.025675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.040 [2024-11-26 23:05:26.032756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.040 [2024-11-26 23:05:26.032795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:47.040 [2024-11-26 23:05:26.032806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.040 [2024-11-26 23:05:26.032814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.040 [2024-11-26 23:05:26.032875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.040 [2024-11-26 23:05:26.032883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:47.040 [2024-11-26 23:05:26.032894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.040 [2024-11-26 23:05:26.032901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.040 [2024-11-26 23:05:26.032978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.040 [2024-11-26 23:05:26.032988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:47.040 [2024-11-26 23:05:26.033001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.040 [2024-11-26 23:05:26.033008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.040 [2024-11-26 23:05:26.033027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.040 [2024-11-26 23:05:26.033039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:47.040 [2024-11-26 23:05:26.033048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.040 [2024-11-26 23:05:26.033059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.040 [2024-11-26 23:05:26.046364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.040 [2024-11-26 23:05:26.046545] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:47.040 [2024-11-26 23:05:26.046567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.040 [2024-11-26 23:05:26.046576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.040 [2024-11-26 23:05:26.057138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.040 [2024-11-26 23:05:26.057180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:47.040 [2024-11-26 23:05:26.057193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.040 [2024-11-26 23:05:26.057202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.040 [2024-11-26 23:05:26.057285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.040 [2024-11-26 23:05:26.057312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:47.040 [2024-11-26 23:05:26.057323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.040 [2024-11-26 23:05:26.057334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.040 [2024-11-26 23:05:26.057386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.040 [2024-11-26 23:05:26.057397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:47.040 [2024-11-26 23:05:26.057407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.040 [2024-11-26 23:05:26.057415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.040 [2024-11-26 23:05:26.057490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.040 [2024-11-26 23:05:26.057500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:47.040 [2024-11-26 23:05:26.057510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.040 [2024-11-26 23:05:26.057517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.040 [2024-11-26 23:05:26.057562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.040 [2024-11-26 23:05:26.057572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:47.040 [2024-11-26 23:05:26.057582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.040 [2024-11-26 23:05:26.057594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.040 [2024-11-26 23:05:26.057640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.040 [2024-11-26 23:05:26.057649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:47.040 [2024-11-26 23:05:26.057664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.040 [2024-11-26 23:05:26.057673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.040 [2024-11-26 23:05:26.057724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.040 [2024-11-26 23:05:26.057734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:47.040 [2024-11-26 23:05:26.057744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.040 [2024-11-26 23:05:26.057752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.040 [2024-11-26 23:05:26.057910] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process 
finished, name 'FTL shutdown', duration = 61.796 ms, result 0
00:20:47.040 true
00:20:47.040 23:05:26 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 90167
00:20:47.040 23:05:26 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 90167 ']'
00:20:47.040 23:05:26 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 90167
00:20:47.040 23:05:26 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname
00:20:47.040 23:05:26 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:47.040 23:05:26 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90167
00:20:47.040 killing process with pid 90167 23:05:26 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:47.040 23:05:26 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:47.040 23:05:26 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90167'
00:20:47.040 23:05:26 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 90167
00:20:47.040 23:05:26 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 90167
00:20:52.326 23:05:31 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
00:20:56.549 262144+0 records in
00:20:56.549 262144+0 records out
00:20:56.549 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.21333 s, 255 MB/s
00:20:56.549 23:05:35 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:20:58.472 23:05:37 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:20:58.472 [2024-11-26 23:05:37.321176] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization...
00:20:58.472 [2024-11-26 23:05:37.321318] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90370 ]
00:20:58.472 [2024-11-26 23:05:37.454398] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
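A note on the shutdown statistics dumped above: WAF (write amplification factor) is reported as inf, consistent with 960 total writes against 0 user writes, i.e. a division by zero. The xtrace that follows the dump shows the teardown idiom from autotest_common.sh's killprocess helper: probe the pid with kill -0 (signal 0 delivers nothing; it only checks that the process exists and is signalable), send the default SIGTERM, then wait to reap the exit status. A condensed sketch of that idiom, with the uname and reactor_0/sudo guards seen in the trace omitted for brevity:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1    # no pid supplied
        kill -0 "$pid" || return 1   # signal 0: liveness probe, sends nothing
        echo "killing process with pid $pid"
        kill "$pid"                  # default SIGTERM
        wait "$pid"                  # reap the exit status; works here because
                                     # the target was started by this shell
    }

The dd numbers also check out arithmetically: bs=4K count=256K is 262144 blocks of 4096 B = 1073741824 B, and 1073741824 B / 4.21333 s is roughly 255 MB/s (decimal megabytes), matching dd's own summary line.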
00:20:58.472 [2024-11-26 23:05:37.479746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.472 [2024-11-26 23:05:37.504619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.735 [2024-11-26 23:05:37.610577] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:58.735 [2024-11-26 23:05:37.610668] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:58.735 [2024-11-26 23:05:37.768201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.735 [2024-11-26 23:05:37.768247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:58.735 [2024-11-26 23:05:37.768265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:58.735 [2024-11-26 23:05:37.768273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.735 [2024-11-26 23:05:37.768342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.735 [2024-11-26 23:05:37.768357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:58.735 [2024-11-26 23:05:37.768365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:58.735 [2024-11-26 23:05:37.768378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.735 [2024-11-26 23:05:37.768399] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:58.735 [2024-11-26 23:05:37.768653] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:58.735 [2024-11-26 23:05:37.768671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.735 [2024-11-26 23:05:37.768683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:58.735 [2024-11-26 23:05:37.768691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:20:58.735 [2024-11-26 23:05:37.768699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.735 [2024-11-26 23:05:37.770171] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:58.735 [2024-11-26 23:05:37.773515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.735 [2024-11-26 23:05:37.773550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:58.735 [2024-11-26 23:05:37.773568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.346 ms 00:20:58.735 [2024-11-26 23:05:37.773580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.735 [2024-11-26 23:05:37.773650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.735 [2024-11-26 23:05:37.773660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:58.735 [2024-11-26 23:05:37.773669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:58.735 [2024-11-26 23:05:37.773676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.735 [2024-11-26 23:05:37.781002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.735 [2024-11-26 23:05:37.781035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:58.735 [2024-11-26 23:05:37.781045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.278 ms 00:20:58.735 [2024-11-26 23:05:37.781053] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.735 [2024-11-26 23:05:37.781144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.735 [2024-11-26 23:05:37.781155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:58.735 [2024-11-26 23:05:37.781165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:58.735 [2024-11-26 23:05:37.781173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.735 [2024-11-26 23:05:37.781216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.735 [2024-11-26 23:05:37.781225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:58.735 [2024-11-26 23:05:37.781235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:58.735 [2024-11-26 23:05:37.781243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.735 [2024-11-26 23:05:37.781265] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:58.735 [2024-11-26 23:05:37.783139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.735 [2024-11-26 23:05:37.783169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:58.735 [2024-11-26 23:05:37.783179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.879 ms 00:20:58.735 [2024-11-26 23:05:37.783186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.735 [2024-11-26 23:05:37.783219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.735 [2024-11-26 23:05:37.783227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:58.735 [2024-11-26 23:05:37.783239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:58.735 [2024-11-26 23:05:37.783247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.735 [2024-11-26 23:05:37.783286] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:58.735 [2024-11-26 23:05:37.783332] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:58.735 [2024-11-26 23:05:37.783377] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:58.735 [2024-11-26 23:05:37.783396] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:58.735 [2024-11-26 23:05:37.783507] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:58.735 [2024-11-26 23:05:37.783521] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:58.735 [2024-11-26 23:05:37.783531] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:58.735 [2024-11-26 23:05:37.783546] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:58.735 [2024-11-26 23:05:37.783555] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:58.735 [2024-11-26 23:05:37.783564] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:58.735 [2024-11-26 23:05:37.783572] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:20:58.735 [2024-11-26 23:05:37.783579] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:58.735 [2024-11-26 23:05:37.783591] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:58.735 [2024-11-26 23:05:37.783598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.735 [2024-11-26 23:05:37.783605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:58.735 [2024-11-26 23:05:37.783615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:20:58.735 [2024-11-26 23:05:37.783624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.735 [2024-11-26 23:05:37.783712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.735 [2024-11-26 23:05:37.783721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:58.735 [2024-11-26 23:05:37.783729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:58.735 [2024-11-26 23:05:37.783739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.735 [2024-11-26 23:05:37.783851] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:58.735 [2024-11-26 23:05:37.783861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:58.735 [2024-11-26 23:05:37.783869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:58.735 [2024-11-26 23:05:37.783882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.735 [2024-11-26 23:05:37.783892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:58.735 [2024-11-26 23:05:37.783899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:58.735 [2024-11-26 23:05:37.783911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:58.735 [2024-11-26 23:05:37.783919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:58.735 [2024-11-26 23:05:37.783927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:58.735 [2024-11-26 23:05:37.783936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:58.735 [2024-11-26 23:05:37.783943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:58.736 [2024-11-26 23:05:37.783950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:58.736 [2024-11-26 23:05:37.783956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:58.736 [2024-11-26 23:05:37.783962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:58.736 [2024-11-26 23:05:37.783969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:58.736 [2024-11-26 23:05:37.783978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.736 [2024-11-26 23:05:37.783985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:58.736 [2024-11-26 23:05:37.783992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:58.736 [2024-11-26 23:05:37.783999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.736 [2024-11-26 23:05:37.784006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:58.736 [2024-11-26 23:05:37.784013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:58.736 [2024-11-26 23:05:37.784019] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:58.736 [2024-11-26 23:05:37.784026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:58.736 [2024-11-26 23:05:37.784033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:58.736 [2024-11-26 23:05:37.784039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:58.736 [2024-11-26 23:05:37.784050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:58.736 [2024-11-26 23:05:37.784057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:58.736 [2024-11-26 23:05:37.784063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:58.736 [2024-11-26 23:05:37.784070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:58.736 [2024-11-26 23:05:37.784076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:58.736 [2024-11-26 23:05:37.784083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:58.736 [2024-11-26 23:05:37.784089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:58.736 [2024-11-26 23:05:37.784096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:58.736 [2024-11-26 23:05:37.784101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:58.736 [2024-11-26 23:05:37.784108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:58.736 [2024-11-26 23:05:37.784115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:58.736 [2024-11-26 23:05:37.784121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:58.736 [2024-11-26 23:05:37.784127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:58.736 [2024-11-26 23:05:37.784133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:58.736 [2024-11-26 23:05:37.784139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.736 [2024-11-26 23:05:37.784145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:58.736 [2024-11-26 23:05:37.784156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:58.736 [2024-11-26 23:05:37.784164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.736 [2024-11-26 23:05:37.784170] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:58.736 [2024-11-26 23:05:37.784178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:58.736 [2024-11-26 23:05:37.784185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:58.736 [2024-11-26 23:05:37.784192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.736 [2024-11-26 23:05:37.784200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:58.736 [2024-11-26 23:05:37.784208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:58.736 [2024-11-26 23:05:37.784214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:58.736 [2024-11-26 23:05:37.784221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:58.736 [2024-11-26 23:05:37.784227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:58.736 [2024-11-26 23:05:37.784234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:20:58.736 [2024-11-26 23:05:37.784242] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:58.736 [2024-11-26 23:05:37.784252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:58.736 [2024-11-26 23:05:37.784260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:58.736 [2024-11-26 23:05:37.784267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:58.736 [2024-11-26 23:05:37.784276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:58.736 [2024-11-26 23:05:37.784284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:58.736 [2024-11-26 23:05:37.784291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:58.736 [2024-11-26 23:05:37.784549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:58.736 [2024-11-26 23:05:37.784581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:58.736 [2024-11-26 23:05:37.784611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:58.736 [2024-11-26 23:05:37.784639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:58.736 [2024-11-26 23:05:37.784667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:58.736 [2024-11-26 23:05:37.784694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:58.736 [2024-11-26 23:05:37.784721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:58.736 [2024-11-26 23:05:37.784748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:58.736 [2024-11-26 23:05:37.784777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:58.736 [2024-11-26 23:05:37.784853] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:58.736 [2024-11-26 23:05:37.784885] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:58.736 [2024-11-26 23:05:37.784914] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:58.736 [2024-11-26 23:05:37.784943] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:58.736 [2024-11-26 23:05:37.784975] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:58.736 [2024-11-26 23:05:37.785004] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:58.736 [2024-11-26 23:05:37.785034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.736 [2024-11-26 23:05:37.785053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:58.736 [2024-11-26 23:05:37.785072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.256 ms 00:20:58.736 [2024-11-26 23:05:37.785094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.736 [2024-11-26 23:05:37.798346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.736 [2024-11-26 23:05:37.798382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:58.736 [2024-11-26 23:05:37.798395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.170 ms 00:20:58.736 [2024-11-26 23:05:37.798404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.736 [2024-11-26 23:05:37.798493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.736 [2024-11-26 23:05:37.798503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:58.736 [2024-11-26 23:05:37.798513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:58.736 [2024-11-26 23:05:37.798521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.736 [2024-11-26 23:05:37.822477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.736 [2024-11-26 23:05:37.822556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:58.736 [2024-11-26 23:05:37.822597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.898 ms 00:20:58.737 [2024-11-26 23:05:37.822617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.737 [2024-11-26 23:05:37.822729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.737 [2024-11-26 23:05:37.822763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:58.737 [2024-11-26 23:05:37.822795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:58.737 [2024-11-26 23:05:37.822818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.737 [2024-11-26 23:05:37.823529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.737 [2024-11-26 23:05:37.823580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:58.737 [2024-11-26 23:05:37.823600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.613 ms 00:20:58.737 [2024-11-26 23:05:37.823615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.737 [2024-11-26 23:05:37.823876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.737 [2024-11-26 23:05:37.823905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:58.737 [2024-11-26 23:05:37.823923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:20:58.737 [2024-11-26 23:05:37.823939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.737 [2024-11-26 23:05:37.832701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.737 [2024-11-26 
23:05:37.832857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:58.737 [2024-11-26 23:05:37.832875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.722 ms 00:20:58.737 [2024-11-26 23:05:37.832893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.737 [2024-11-26 23:05:37.836394] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:58.737 [2024-11-26 23:05:37.836437] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:58.737 [2024-11-26 23:05:37.836450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.737 [2024-11-26 23:05:37.836459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:58.737 [2024-11-26 23:05:37.836469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.459 ms 00:20:58.737 [2024-11-26 23:05:37.836475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.737 [2024-11-26 23:05:37.851542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.737 [2024-11-26 23:05:37.851683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:58.737 [2024-11-26 23:05:37.851701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.023 ms 00:20:58.737 [2024-11-26 23:05:37.851709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.737 [2024-11-26 23:05:37.854321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.737 [2024-11-26 23:05:37.854359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:58.737 [2024-11-26 23:05:37.854369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.566 ms 00:20:58.737 [2024-11-26 23:05:37.854376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.737 [2024-11-26 23:05:37.856838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.737 [2024-11-26 23:05:37.856875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:58.737 [2024-11-26 23:05:37.856885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.423 ms 00:20:58.737 [2024-11-26 23:05:37.856901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.737 [2024-11-26 23:05:37.857230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.737 [2024-11-26 23:05:37.857242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:58.737 [2024-11-26 23:05:37.857251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:20:58.737 [2024-11-26 23:05:37.857259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.000 [2024-11-26 23:05:37.881206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.000 [2024-11-26 23:05:37.881273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:59.000 [2024-11-26 23:05:37.881289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.925 ms 00:20:59.000 [2024-11-26 23:05:37.881326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.000 [2024-11-26 23:05:37.890574] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:59.000 [2024-11-26 23:05:37.894162] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.000 [2024-11-26 23:05:37.894205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:59.000 [2024-11-26 23:05:37.894219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.773 ms 00:20:59.000 [2024-11-26 23:05:37.894228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.000 [2024-11-26 23:05:37.894330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.000 [2024-11-26 23:05:37.894343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:59.000 [2024-11-26 23:05:37.894357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:59.000 [2024-11-26 23:05:37.894366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.000 [2024-11-26 23:05:37.894439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.000 [2024-11-26 23:05:37.894450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:59.000 [2024-11-26 23:05:37.894463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:59.000 [2024-11-26 23:05:37.894473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.000 [2024-11-26 23:05:37.894494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.000 [2024-11-26 23:05:37.894503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:59.000 [2024-11-26 23:05:37.894511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:59.000 [2024-11-26 23:05:37.894522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.000 [2024-11-26 23:05:37.894560] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:59.000 [2024-11-26 23:05:37.894571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.000 [2024-11-26 23:05:37.894580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:59.000 [2024-11-26 23:05:37.894601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:59.000 [2024-11-26 23:05:37.894613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.000 [2024-11-26 23:05:37.900158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.000 [2024-11-26 23:05:37.900205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:59.000 [2024-11-26 23:05:37.900216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.525 ms 00:20:59.000 [2024-11-26 23:05:37.900224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.000 [2024-11-26 23:05:37.900327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.000 [2024-11-26 23:05:37.900342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:59.000 [2024-11-26 23:05:37.900352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:59.000 [2024-11-26 23:05:37.900360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.000 [2024-11-26 23:05:37.901595] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 132.874 ms, result 0 00:20:59.945  [2024-11-26T23:05:40.017Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-26T23:05:40.955Z] Copying: 24/1024 [MB] (14 MBps) 
[2024-11-26T23:05:42.342Z] Copying: 49/1024 [MB] (24 MBps) [2024-11-26T23:05:42.914Z] Copying: 64/1024 [MB] (14 MBps) [2024-11-26T23:05:44.306Z] Copying: 79/1024 [MB] (14 MBps) [2024-11-26T23:05:45.249Z] Copying: 93/1024 [MB] (14 MBps) [2024-11-26T23:05:46.193Z] Copying: 108/1024 [MB] (14 MBps) [2024-11-26T23:05:47.136Z] Copying: 120/1024 [MB] (12 MBps) [2024-11-26T23:05:48.148Z] Copying: 138/1024 [MB] (17 MBps) [2024-11-26T23:05:49.094Z] Copying: 154/1024 [MB] (15 MBps) [2024-11-26T23:05:50.036Z] Copying: 172/1024 [MB] (18 MBps) [2024-11-26T23:05:50.985Z] Copying: 190/1024 [MB] (18 MBps) [2024-11-26T23:05:51.931Z] Copying: 200/1024 [MB] (10 MBps) [2024-11-26T23:05:53.325Z] Copying: 215728/1048576 [kB] (10072 kBps) [2024-11-26T23:05:54.272Z] Copying: 220/1024 [MB] (10 MBps) [2024-11-26T23:05:55.218Z] Copying: 230/1024 [MB] (10 MBps) [2024-11-26T23:05:56.164Z] Copying: 241/1024 [MB] (10 MBps) [2024-11-26T23:05:57.110Z] Copying: 251/1024 [MB] (10 MBps) [2024-11-26T23:05:58.064Z] Copying: 268/1024 [MB] (16 MBps) [2024-11-26T23:05:59.003Z] Copying: 279/1024 [MB] (11 MBps) [2024-11-26T23:05:59.942Z] Copying: 298/1024 [MB] (18 MBps) [2024-11-26T23:06:01.321Z] Copying: 327/1024 [MB] (29 MBps) [2024-11-26T23:06:02.253Z] Copying: 342/1024 [MB] (15 MBps) [2024-11-26T23:06:03.186Z] Copying: 375/1024 [MB] (32 MBps) [2024-11-26T23:06:04.127Z] Copying: 428/1024 [MB] (53 MBps) [2024-11-26T23:06:05.083Z] Copying: 447/1024 [MB] (19 MBps) [2024-11-26T23:06:06.028Z] Copying: 462/1024 [MB] (14 MBps) [2024-11-26T23:06:06.975Z] Copying: 473/1024 [MB] (10 MBps) [2024-11-26T23:06:07.923Z] Copying: 495072/1048576 [kB] (10232 kBps) [2024-11-26T23:06:09.309Z] Copying: 495/1024 [MB] (12 MBps) [2024-11-26T23:06:10.245Z] Copying: 530/1024 [MB] (34 MBps) [2024-11-26T23:06:11.187Z] Copying: 561/1024 [MB] (30 MBps) [2024-11-26T23:06:12.127Z] Copying: 577/1024 [MB] (16 MBps) [2024-11-26T23:06:13.067Z] Copying: 590/1024 [MB] (12 MBps) [2024-11-26T23:06:14.006Z] Copying: 622/1024 [MB] (32 MBps) [2024-11-26T23:06:14.948Z] Copying: 653/1024 [MB] (30 MBps) [2024-11-26T23:06:16.334Z] Copying: 677/1024 [MB] (24 MBps) [2024-11-26T23:06:16.945Z] Copying: 696/1024 [MB] (19 MBps) [2024-11-26T23:06:18.352Z] Copying: 716/1024 [MB] (20 MBps) [2024-11-26T23:06:18.925Z] Copying: 736/1024 [MB] (19 MBps) [2024-11-26T23:06:20.322Z] Copying: 750/1024 [MB] (13 MBps) [2024-11-26T23:06:21.266Z] Copying: 770/1024 [MB] (20 MBps) [2024-11-26T23:06:22.211Z] Copying: 785/1024 [MB] (14 MBps) [2024-11-26T23:06:23.154Z] Copying: 808/1024 [MB] (22 MBps) [2024-11-26T23:06:24.095Z] Copying: 826/1024 [MB] (18 MBps) [2024-11-26T23:06:25.038Z] Copying: 849/1024 [MB] (23 MBps) [2024-11-26T23:06:25.978Z] Copying: 867/1024 [MB] (17 MBps) [2024-11-26T23:06:27.032Z] Copying: 880/1024 [MB] (13 MBps) [2024-11-26T23:06:27.979Z] Copying: 895/1024 [MB] (15 MBps) [2024-11-26T23:06:28.920Z] Copying: 912/1024 [MB] (17 MBps) [2024-11-26T23:06:30.304Z] Copying: 934/1024 [MB] (21 MBps) [2024-11-26T23:06:31.250Z] Copying: 951/1024 [MB] (17 MBps) [2024-11-26T23:06:32.187Z] Copying: 968/1024 [MB] (17 MBps) [2024-11-26T23:06:33.137Z] Copying: 985/1024 [MB] (17 MBps) [2024-11-26T23:06:34.078Z] Copying: 1004/1024 [MB] (18 MBps) [2024-11-26T23:06:34.078Z] Copying: 1023/1024 [MB] (18 MBps) [2024-11-26T23:06:34.078Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-26 23:06:33.917888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.951 [2024-11-26 23:06:33.917933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:54.951 
[2024-11-26 23:06:33.917946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:54.951 [2024-11-26 23:06:33.917957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.951 [2024-11-26 23:06:33.917975] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:54.951 [2024-11-26 23:06:33.918571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.951 [2024-11-26 23:06:33.918590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:54.951 [2024-11-26 23:06:33.918598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:21:54.951 [2024-11-26 23:06:33.918605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.951 [2024-11-26 23:06:33.920158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.951 [2024-11-26 23:06:33.920187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:54.951 [2024-11-26 23:06:33.920202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.537 ms 00:21:54.951 [2024-11-26 23:06:33.920218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.951 [2024-11-26 23:06:33.932610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.951 [2024-11-26 23:06:33.932724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:54.951 [2024-11-26 23:06:33.932738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.379 ms 00:21:54.951 [2024-11-26 23:06:33.932746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.951 [2024-11-26 23:06:33.937565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.951 [2024-11-26 23:06:33.937588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:54.951 [2024-11-26 23:06:33.937597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.795 ms 00:21:54.951 [2024-11-26 23:06:33.937604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.951 [2024-11-26 23:06:33.938844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.951 [2024-11-26 23:06:33.938873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:54.951 [2024-11-26 23:06:33.938881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.199 ms 00:21:54.951 [2024-11-26 23:06:33.938887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.951 [2024-11-26 23:06:33.942528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.951 [2024-11-26 23:06:33.942557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:54.951 [2024-11-26 23:06:33.942565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.617 ms 00:21:54.951 [2024-11-26 23:06:33.942571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.951 [2024-11-26 23:06:33.942656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.951 [2024-11-26 23:06:33.942664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:54.951 [2024-11-26 23:06:33.942671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:21:54.951 [2024-11-26 23:06:33.942677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.951 [2024-11-26 23:06:33.944522] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.951 [2024-11-26 23:06:33.944548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:54.951 [2024-11-26 23:06:33.944564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.828 ms 00:21:54.951 [2024-11-26 23:06:33.944569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.951 [2024-11-26 23:06:33.945735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.951 [2024-11-26 23:06:33.945760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:54.951 [2024-11-26 23:06:33.945766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.142 ms 00:21:54.951 [2024-11-26 23:06:33.945772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.951 [2024-11-26 23:06:33.946876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.951 [2024-11-26 23:06:33.946903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:54.951 [2024-11-26 23:06:33.946910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.082 ms 00:21:54.951 [2024-11-26 23:06:33.946915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.951 [2024-11-26 23:06:33.947923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.951 [2024-11-26 23:06:33.948026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:54.951 [2024-11-26 23:06:33.948038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.967 ms 00:21:54.951 [2024-11-26 23:06:33.948044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.951 [2024-11-26 23:06:33.948066] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:54.951 [2024-11-26 23:06:33.948078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:54.951 [2024-11-26 23:06:33.948086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:54.951 [2024-11-26 23:06:33.948092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:54.951 [2024-11-26 23:06:33.948098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:54.951 [2024-11-26 23:06:33.948104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:54.951 [2024-11-26 23:06:33.948111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:54.951 [2024-11-26 23:06:33.948117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:54.951 [2024-11-26 23:06:33.948123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:54.951 [2024-11-26 23:06:33.948128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:54.951 [2024-11-26 23:06:33.948134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
12: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948312] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948462] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:54.952 [2024-11-26 23:06:33.948579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:54.953 [2024-11-26 23:06:33.948585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:54.953 [2024-11-26 23:06:33.948591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:54.953 [2024-11-26 23:06:33.948596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:54.953 [2024-11-26 23:06:33.948602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:54.953 [2024-11-26 
23:06:33.948608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:54.953 [2024-11-26 23:06:33.948613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:54.953 [2024-11-26 23:06:33.948619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:54.953 [2024-11-26 23:06:33.948625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:54.953 [2024-11-26 23:06:33.948630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:54.953 [2024-11-26 23:06:33.948636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:54.953 [2024-11-26 23:06:33.948642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:54.953 [2024-11-26 23:06:33.948648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:54.953 [2024-11-26 23:06:33.948653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:54.953 [2024-11-26 23:06:33.948658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:54.953 [2024-11-26 23:06:33.948664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:54.953 [2024-11-26 23:06:33.948671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:54.953 [2024-11-26 23:06:33.948677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:54.953 [2024-11-26 23:06:33.948683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:54.953 [2024-11-26 23:06:33.948696] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:54.953 [2024-11-26 23:06:33.948702] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 30f64c7f-41c7-49e6-993d-bc2c7c41b945 00:21:54.953 [2024-11-26 23:06:33.948709] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:54.953 [2024-11-26 23:06:33.948715] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:54.953 [2024-11-26 23:06:33.948721] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:54.953 [2024-11-26 23:06:33.948727] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:54.953 [2024-11-26 23:06:33.948733] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:54.953 [2024-11-26 23:06:33.948740] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:54.953 [2024-11-26 23:06:33.948745] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:54.953 [2024-11-26 23:06:33.948750] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:54.953 [2024-11-26 23:06:33.948756] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:54.953 [2024-11-26 23:06:33.948762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.953 [2024-11-26 23:06:33.948774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:54.953 [2024-11-26 23:06:33.948785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.697 ms 00:21:54.953 [2024-11-26 23:06:33.948791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.953 [2024-11-26 23:06:33.950647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.953 [2024-11-26 23:06:33.950667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:54.953 [2024-11-26 23:06:33.950681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.843 ms 00:21:54.953 [2024-11-26 23:06:33.950688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.953 [2024-11-26 23:06:33.950777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.953 [2024-11-26 23:06:33.950784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:54.953 [2024-11-26 23:06:33.950790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:21:54.953 [2024-11-26 23:06:33.950796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.953 [2024-11-26 23:06:33.956600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:54.953 [2024-11-26 23:06:33.956702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:54.953 [2024-11-26 23:06:33.956753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:54.953 [2024-11-26 23:06:33.956772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.953 [2024-11-26 23:06:33.956827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:54.953 [2024-11-26 23:06:33.956858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:54.953 [2024-11-26 23:06:33.956874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:54.953 [2024-11-26 23:06:33.956888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.953 [2024-11-26 23:06:33.956931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:54.953 [2024-11-26 23:06:33.956958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:54.953 [2024-11-26 23:06:33.957017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:54.953 [2024-11-26 23:06:33.957035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.953 [2024-11-26 23:06:33.957057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:54.953 [2024-11-26 23:06:33.957076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:54.953 [2024-11-26 23:06:33.957091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:54.953 [2024-11-26 23:06:33.957106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.953 [2024-11-26 23:06:33.967733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:54.953 [2024-11-26 23:06:33.967846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:54.953 [2024-11-26 23:06:33.967888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:54.953 [2024-11-26 23:06:33.967907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.953 [2024-11-26 23:06:33.976505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:54.953 [2024-11-26 23:06:33.976624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:54.953 
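The statistics dump above closes the first shutdown pass: every one of the 100 bands reads 0 / 261120 and free, and WAF (write amplification factor) prints as inf because the 960 total writes are all internal metadata with zero user writes. A small sketch of that arithmetic, under the assumption that WAF here is the conventional media-writes/user-writes ratio and that one FTL block is 4 KiB (consistent with the layout numbers elsewhere in this log):

# Hedged sketch of the arithmetic behind the dump above.
FTL_BLOCK = 4096        # bytes per block (assumed)
BAND_BLOCKS = 261120    # "Band N: 0 / 261120" above

print(BAND_BLOCKS * FTL_BLOCK / 2**20)   # -> 1020.0 MiB per band

total_writes = 960      # media writes from the stats dump
user_writes = 0         # no user I/O had hit the device yet

# WAF = media writes / user writes; with zero user writes the ratio
# diverges, which is exactly why the log prints "WAF: inf".
waf = float("inf") if user_writes == 0 else total_writes / user_writes
print(waf)              # -> inf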
[2024-11-26 23:06:33.976665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:54.953 [2024-11-26 23:06:33.976683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.953 [2024-11-26 23:06:33.976764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:54.953 [2024-11-26 23:06:33.976784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:54.953 [2024-11-26 23:06:33.976801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:54.953 [2024-11-26 23:06:33.976815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.953 [2024-11-26 23:06:33.976844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:54.953 [2024-11-26 23:06:33.976898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:54.953 [2024-11-26 23:06:33.976922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:54.953 [2024-11-26 23:06:33.976937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.953 [2024-11-26 23:06:33.977005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:54.953 [2024-11-26 23:06:33.977013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:54.953 [2024-11-26 23:06:33.977020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:54.953 [2024-11-26 23:06:33.977030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.953 [2024-11-26 23:06:33.977055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:54.953 [2024-11-26 23:06:33.977062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:54.954 [2024-11-26 23:06:33.977071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:54.954 [2024-11-26 23:06:33.977077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.954 [2024-11-26 23:06:33.977112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:54.954 [2024-11-26 23:06:33.977120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:54.954 [2024-11-26 23:06:33.977127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:54.954 [2024-11-26 23:06:33.977134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.954 [2024-11-26 23:06:33.977176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:54.954 [2024-11-26 23:06:33.977185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:54.954 [2024-11-26 23:06:33.977199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:54.954 [2024-11-26 23:06:33.977205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.954 [2024-11-26 23:06:33.977334] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 59.402 ms, result 0 00:21:55.525 00:21:55.525 00:21:55.525 23:06:34 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:21:55.525 [2024-11-26 23:06:34.629921] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
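The test then re-runs spdk_dd with --ib=ftl0 and --count=262144, reading the data back out of the FTL bdev into the test file. Assuming one count unit is one 4096-byte FTL block (an assumption, but consistent with the rest of this log), that is exactly the 1024 [MB] total shown by the Copying progress lines further down:

# Sketch: the --count on the spdk_dd line above, converted to the
# 1024 [MB] total of the progress output (4096-byte block assumed).
count = 262144
print(count * 4096 / 2**20)   # -> 1024.0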
00:21:55.525 [2024-11-26 23:06:34.630195] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90959 ] 00:21:55.783 [2024-11-26 23:06:34.763079] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:55.783 [2024-11-26 23:06:34.788628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.783 [2024-11-26 23:06:34.815799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.042 [2024-11-26 23:06:34.917455] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:56.042 [2024-11-26 23:06:34.917514] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:56.043 [2024-11-26 23:06:35.071203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.043 [2024-11-26 23:06:35.071354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:56.043 [2024-11-26 23:06:35.071373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:56.043 [2024-11-26 23:06:35.071381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.043 [2024-11-26 23:06:35.071436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.043 [2024-11-26 23:06:35.071446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:56.043 [2024-11-26 23:06:35.071452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:56.043 [2024-11-26 23:06:35.071461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.043 [2024-11-26 23:06:35.071477] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:56.043 [2024-11-26 23:06:35.071660] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:56.043 [2024-11-26 23:06:35.071672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.043 [2024-11-26 23:06:35.071684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:56.043 [2024-11-26 23:06:35.071693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:21:56.043 [2024-11-26 23:06:35.071699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.043 [2024-11-26 23:06:35.072949] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:56.043 [2024-11-26 23:06:35.075520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.043 [2024-11-26 23:06:35.075550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:56.043 [2024-11-26 23:06:35.075565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.572 ms 00:21:56.043 [2024-11-26 23:06:35.075571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.043 [2024-11-26 23:06:35.075630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.043 [2024-11-26 23:06:35.075638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:56.043 [2024-11-26 23:06:35.075645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:21:56.043 [2024-11-26 
23:06:35.075654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.043 [2024-11-26 23:06:35.082025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.043 [2024-11-26 23:06:35.082055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:56.043 [2024-11-26 23:06:35.082064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.331 ms 00:21:56.043 [2024-11-26 23:06:35.082070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.043 [2024-11-26 23:06:35.082139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.043 [2024-11-26 23:06:35.082148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:56.043 [2024-11-26 23:06:35.082155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:21:56.043 [2024-11-26 23:06:35.082163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.043 [2024-11-26 23:06:35.082201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.043 [2024-11-26 23:06:35.082210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:56.043 [2024-11-26 23:06:35.082220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:56.043 [2024-11-26 23:06:35.082225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.043 [2024-11-26 23:06:35.082243] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:56.043 [2024-11-26 23:06:35.083826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.043 [2024-11-26 23:06:35.083853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:56.043 [2024-11-26 23:06:35.083861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.588 ms 00:21:56.043 [2024-11-26 23:06:35.083868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.043 [2024-11-26 23:06:35.083893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.043 [2024-11-26 23:06:35.083901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:56.043 [2024-11-26 23:06:35.083912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:56.043 [2024-11-26 23:06:35.083919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.043 [2024-11-26 23:06:35.083939] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:56.043 [2024-11-26 23:06:35.083958] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:56.043 [2024-11-26 23:06:35.083987] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:56.043 [2024-11-26 23:06:35.084001] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:56.043 [2024-11-26 23:06:35.084087] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:56.043 [2024-11-26 23:06:35.084102] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:56.043 [2024-11-26 23:06:35.084110] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:56.043 
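Nearly everything else in this stretch is trace_step output from mngt/ftl_mngt.c, emitted as a fixed quadruple of notices: Action (or Rollback), name, duration, status. To pull the step timings out of a saved copy of this console output, a rough and purely illustrative sketch with the Python standard library ('log' is assumed to hold the console text as one string; the regex mirrors the exact message shapes above):

import re

# Rough sketch, illustrative only: fold the 4-notice trace_step records
# (Action/Rollback, name, duration, status) into tuples.
NOTICE = re.compile(
    r"\[FTL\]\[ftl0\] "
    r"(Action|Rollback|name: .*?|duration: .*? ms|status: \d+)"
    r"(?= \d{2}:\d{2}:\d{2}| \[20|$)"
)

def parse_steps(log):
    steps, cur = [], []
    for m in NOTICE.finditer(log):
        cur.append(m.group(1))
        if m.group(1).startswith("status:"):  # 4th notice closes a record
            steps.append(tuple(cur))
            cur = []
    return steps

# e.g. ('Action', 'name: Load super block', 'duration: 2.572 ms', 'status: 0')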
[2024-11-26 23:06:35.084118] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:56.043 [2024-11-26 23:06:35.084124] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:56.043 [2024-11-26 23:06:35.084130] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:56.043 [2024-11-26 23:06:35.084136] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:56.043 [2024-11-26 23:06:35.084142] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:56.043 [2024-11-26 23:06:35.084150] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:56.043 [2024-11-26 23:06:35.084156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.043 [2024-11-26 23:06:35.084161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:56.043 [2024-11-26 23:06:35.084167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:21:56.043 [2024-11-26 23:06:35.084179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.043 [2024-11-26 23:06:35.084241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.043 [2024-11-26 23:06:35.084247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:56.043 [2024-11-26 23:06:35.084253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:56.043 [2024-11-26 23:06:35.084258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.043 [2024-11-26 23:06:35.084365] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:56.043 [2024-11-26 23:06:35.084374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:56.043 [2024-11-26 23:06:35.084381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:56.043 [2024-11-26 23:06:35.084386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:56.043 [2024-11-26 23:06:35.084394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:56.043 [2024-11-26 23:06:35.084400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:56.043 [2024-11-26 23:06:35.084410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:56.043 [2024-11-26 23:06:35.084416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:56.043 [2024-11-26 23:06:35.084421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:56.043 [2024-11-26 23:06:35.084426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:56.043 [2024-11-26 23:06:35.084433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:56.043 [2024-11-26 23:06:35.084438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:56.043 [2024-11-26 23:06:35.084444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:56.043 [2024-11-26 23:06:35.084449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:56.043 [2024-11-26 23:06:35.084454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:56.043 [2024-11-26 23:06:35.084459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:56.043 [2024-11-26 23:06:35.084464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 
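The layout summary just above is internally consistent: 20971520 L2P entries at an address size of 4 bytes is exactly the 80.00 MiB reported for the l2p region in the NV cache layout that follows. Checking:

# Sketch: l2p region size from the two numbers in the layout summary.
l2p_entries = 20971520   # "L2P entries"
addr_size = 4            # bytes, "L2P address size"
print(l2p_entries * addr_size / 2**20)   # -> 80.0, i.e. "blocks: 80.00 MiB"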
00:21:56.043 [2024-11-26 23:06:35.084469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:56.043 [2024-11-26 23:06:35.084476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:56.043 [2024-11-26 23:06:35.084482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:56.043 [2024-11-26 23:06:35.084488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:56.043 [2024-11-26 23:06:35.084493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:56.043 [2024-11-26 23:06:35.084499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:56.043 [2024-11-26 23:06:35.084504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:56.043 [2024-11-26 23:06:35.084509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:56.043 [2024-11-26 23:06:35.084514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:56.043 [2024-11-26 23:06:35.084522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:56.043 [2024-11-26 23:06:35.084527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:56.043 [2024-11-26 23:06:35.084533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:56.043 [2024-11-26 23:06:35.084538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:56.043 [2024-11-26 23:06:35.084542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:56.043 [2024-11-26 23:06:35.084547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:56.043 [2024-11-26 23:06:35.084552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:56.043 [2024-11-26 23:06:35.084557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:56.043 [2024-11-26 23:06:35.084563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:56.043 [2024-11-26 23:06:35.084567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:56.043 [2024-11-26 23:06:35.084572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:56.043 [2024-11-26 23:06:35.084577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:56.043 [2024-11-26 23:06:35.084582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:56.043 [2024-11-26 23:06:35.084587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:56.043 [2024-11-26 23:06:35.084593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:56.043 [2024-11-26 23:06:35.084598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:56.044 [2024-11-26 23:06:35.084607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:56.044 [2024-11-26 23:06:35.084612] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:56.044 [2024-11-26 23:06:35.084617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:56.044 [2024-11-26 23:06:35.084625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:56.044 [2024-11-26 23:06:35.084630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:56.044 [2024-11-26 23:06:35.084636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:56.044 [2024-11-26 23:06:35.084641] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:56.044 [2024-11-26 23:06:35.084646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:56.044 [2024-11-26 23:06:35.084654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:56.044 [2024-11-26 23:06:35.084659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:56.044 [2024-11-26 23:06:35.084664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:56.044 [2024-11-26 23:06:35.084671] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:56.044 [2024-11-26 23:06:35.084678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:56.044 [2024-11-26 23:06:35.084685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:56.044 [2024-11-26 23:06:35.084690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:56.044 [2024-11-26 23:06:35.084695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:56.044 [2024-11-26 23:06:35.084702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:56.044 [2024-11-26 23:06:35.084707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:56.044 [2024-11-26 23:06:35.084713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:56.044 [2024-11-26 23:06:35.084718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:56.044 [2024-11-26 23:06:35.084723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:56.044 [2024-11-26 23:06:35.084729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:56.044 [2024-11-26 23:06:35.084734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:56.044 [2024-11-26 23:06:35.084740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:56.044 [2024-11-26 23:06:35.084745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:56.044 [2024-11-26 23:06:35.084750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:56.044 [2024-11-26 23:06:35.084755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:56.044 [2024-11-26 23:06:35.084761] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:56.044 [2024-11-26 23:06:35.084767] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 
blk_offs:0x0 blk_sz:0x20 00:21:56.044 [2024-11-26 23:06:35.084774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:56.044 [2024-11-26 23:06:35.084779] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:56.044 [2024-11-26 23:06:35.084784] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:56.044 [2024-11-26 23:06:35.084791] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:56.044 [2024-11-26 23:06:35.084796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.044 [2024-11-26 23:06:35.084802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:56.044 [2024-11-26 23:06:35.084808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:21:56.044 [2024-11-26 23:06:35.084815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.044 [2024-11-26 23:06:35.096522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.044 [2024-11-26 23:06:35.096644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:56.044 [2024-11-26 23:06:35.096695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.668 ms 00:21:56.044 [2024-11-26 23:06:35.096720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.044 [2024-11-26 23:06:35.096807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.044 [2024-11-26 23:06:35.096831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:56.044 [2024-11-26 23:06:35.096848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:21:56.044 [2024-11-26 23:06:35.096862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.044 [2024-11-26 23:06:35.114033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.044 [2024-11-26 23:06:35.114200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:56.044 [2024-11-26 23:06:35.114422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.111 ms 00:21:56.044 [2024-11-26 23:06:35.114459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.044 [2024-11-26 23:06:35.114522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.044 [2024-11-26 23:06:35.114642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:56.044 [2024-11-26 23:06:35.114673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:56.044 [2024-11-26 23:06:35.114709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.044 [2024-11-26 23:06:35.115228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.044 [2024-11-26 23:06:35.115355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:56.044 [2024-11-26 23:06:35.115418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:21:56.044 [2024-11-26 23:06:35.115447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.044 [2024-11-26 23:06:35.115631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
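The superblock metadata dump repeats the same layout in raw form, one region per notice, as hexadecimal blk_offs/blk_sz block counts. Decoding a few of them reproduces the MiB figures from the human-readable dump above; the type-to-region mapping in the comments is inferred from those matching sizes, and the 4 KiB block size is again an assumption:

# Sketch: hex block counts from the SB metadata layout above -> MiB.
BLOCK = 4096  # bytes per block (assumed)

def mib(blocks_hex):
    return int(blocks_hex, 16) * BLOCK / 2**20

print(mib("0x20"))       # type 0x0, sb        -> 0.125    ("0.12 MiB")
print(mib("0x5000"))     # type 0x2, l2p       -> 80.0     ("80.00 MiB")
print(mib("0x1900000"))  # type 0x9, data_btm  -> 102400.0 ("102400.00 MiB")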
00:21:56.044 [2024-11-26 23:06:35.115784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:56.044 [2024-11-26 23:06:35.115822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:21:56.044 [2024-11-26 23:06:35.115846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.044 [2024-11-26 23:06:35.122954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.044 [2024-11-26 23:06:35.123043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:56.044 [2024-11-26 23:06:35.123080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.035 ms 00:21:56.044 [2024-11-26 23:06:35.123105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.044 [2024-11-26 23:06:35.125897] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:56.044 [2024-11-26 23:06:35.125998] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:56.044 [2024-11-26 23:06:35.126051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.044 [2024-11-26 23:06:35.126067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:56.044 [2024-11-26 23:06:35.126083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.873 ms 00:21:56.044 [2024-11-26 23:06:35.126098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.044 [2024-11-26 23:06:35.137771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.044 [2024-11-26 23:06:35.137885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:56.044 [2024-11-26 23:06:35.137932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.480 ms 00:21:56.044 [2024-11-26 23:06:35.137952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.044 [2024-11-26 23:06:35.139902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.044 [2024-11-26 23:06:35.139998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:56.044 [2024-11-26 23:06:35.140038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.914 ms 00:21:56.044 [2024-11-26 23:06:35.140055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.044 [2024-11-26 23:06:35.141673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.044 [2024-11-26 23:06:35.141757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:56.044 [2024-11-26 23:06:35.141796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.585 ms 00:21:56.044 [2024-11-26 23:06:35.141820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.044 [2024-11-26 23:06:35.142080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.044 [2024-11-26 23:06:35.142186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:56.044 [2024-11-26 23:06:35.142208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:21:56.044 [2024-11-26 23:06:35.142228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.044 [2024-11-26 23:06:35.159457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.044 [2024-11-26 23:06:35.159586] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:56.044 [2024-11-26 23:06:35.159629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.201 ms 00:21:56.044 [2024-11-26 23:06:35.159648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.044 [2024-11-26 23:06:35.165630] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:56.303 [2024-11-26 23:06:35.168257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.303 [2024-11-26 23:06:35.168355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:56.303 [2024-11-26 23:06:35.168404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.571 ms 00:21:56.303 [2024-11-26 23:06:35.168423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.303 [2024-11-26 23:06:35.168485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.303 [2024-11-26 23:06:35.168512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:56.303 [2024-11-26 23:06:35.168525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:56.303 [2024-11-26 23:06:35.168534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.303 [2024-11-26 23:06:35.168621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.303 [2024-11-26 23:06:35.168632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:56.303 [2024-11-26 23:06:35.168639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:56.303 [2024-11-26 23:06:35.168646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.303 [2024-11-26 23:06:35.168668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.303 [2024-11-26 23:06:35.168676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:56.303 [2024-11-26 23:06:35.168682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:56.303 [2024-11-26 23:06:35.168688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.303 [2024-11-26 23:06:35.168719] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:56.303 [2024-11-26 23:06:35.168728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.303 [2024-11-26 23:06:35.168738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:56.303 [2024-11-26 23:06:35.168745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:56.303 [2024-11-26 23:06:35.168751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.303 [2024-11-26 23:06:35.172411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.303 [2024-11-26 23:06:35.172439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:56.303 [2024-11-26 23:06:35.172453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.643 ms 00:21:56.303 [2024-11-26 23:06:35.172460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.303 [2024-11-26 23:06:35.172518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.303 [2024-11-26 23:06:35.172526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:56.303 [2024-11-26 23:06:35.172535] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:56.303 [2024-11-26 23:06:35.172544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.303 [2024-11-26 23:06:35.173765] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 102.192 ms, result 0 00:21:57.245  [2024-11-26T23:06:37.753Z] Copying: 25/1024 [MB] (25 MBps) [2024-11-26T23:06:38.324Z] Copying: 44/1024 [MB] (19 MBps) [2024-11-26T23:06:39.709Z] Copying: 65/1024 [MB] (21 MBps) [2024-11-26T23:06:40.653Z] Copying: 90/1024 [MB] (24 MBps) [2024-11-26T23:06:41.594Z] Copying: 105/1024 [MB] (15 MBps) [2024-11-26T23:06:42.536Z] Copying: 122/1024 [MB] (16 MBps) [2024-11-26T23:06:43.479Z] Copying: 151/1024 [MB] (29 MBps) [2024-11-26T23:06:44.424Z] Copying: 166/1024 [MB] (14 MBps) [2024-11-26T23:06:45.384Z] Copying: 186/1024 [MB] (19 MBps) [2024-11-26T23:06:46.328Z] Copying: 206/1024 [MB] (20 MBps) [2024-11-26T23:06:47.715Z] Copying: 219/1024 [MB] (12 MBps) [2024-11-26T23:06:48.659Z] Copying: 231/1024 [MB] (12 MBps) [2024-11-26T23:06:49.600Z] Copying: 250/1024 [MB] (18 MBps) [2024-11-26T23:06:50.543Z] Copying: 265/1024 [MB] (15 MBps) [2024-11-26T23:06:51.490Z] Copying: 276/1024 [MB] (10 MBps) [2024-11-26T23:06:52.429Z] Copying: 286/1024 [MB] (10 MBps) [2024-11-26T23:06:53.364Z] Copying: 302/1024 [MB] (15 MBps) [2024-11-26T23:06:54.780Z] Copying: 322/1024 [MB] (20 MBps) [2024-11-26T23:06:55.353Z] Copying: 338/1024 [MB] (15 MBps) [2024-11-26T23:06:56.735Z] Copying: 353/1024 [MB] (14 MBps) [2024-11-26T23:06:57.674Z] Copying: 368/1024 [MB] (14 MBps) [2024-11-26T23:06:58.610Z] Copying: 385/1024 [MB] (17 MBps) [2024-11-26T23:06:59.553Z] Copying: 399/1024 [MB] (13 MBps) [2024-11-26T23:07:00.489Z] Copying: 418/1024 [MB] (19 MBps) [2024-11-26T23:07:01.429Z] Copying: 445/1024 [MB] (26 MBps) [2024-11-26T23:07:02.374Z] Copying: 469/1024 [MB] (24 MBps) [2024-11-26T23:07:03.317Z] Copying: 488/1024 [MB] (19 MBps) [2024-11-26T23:07:04.709Z] Copying: 510/1024 [MB] (21 MBps) [2024-11-26T23:07:05.656Z] Copying: 530/1024 [MB] (20 MBps) [2024-11-26T23:07:06.618Z] Copying: 553/1024 [MB] (22 MBps) [2024-11-26T23:07:07.563Z] Copying: 574/1024 [MB] (21 MBps) [2024-11-26T23:07:08.506Z] Copying: 593/1024 [MB] (19 MBps) [2024-11-26T23:07:09.446Z] Copying: 616/1024 [MB] (23 MBps) [2024-11-26T23:07:10.387Z] Copying: 645/1024 [MB] (28 MBps) [2024-11-26T23:07:11.331Z] Copying: 670/1024 [MB] (24 MBps) [2024-11-26T23:07:12.392Z] Copying: 685/1024 [MB] (15 MBps) [2024-11-26T23:07:13.350Z] Copying: 712/1024 [MB] (27 MBps) [2024-11-26T23:07:14.741Z] Copying: 728/1024 [MB] (15 MBps) [2024-11-26T23:07:15.313Z] Copying: 749/1024 [MB] (21 MBps) [2024-11-26T23:07:16.702Z] Copying: 766/1024 [MB] (16 MBps) [2024-11-26T23:07:17.652Z] Copying: 776/1024 [MB] (10 MBps) [2024-11-26T23:07:18.595Z] Copying: 787/1024 [MB] (11 MBps) [2024-11-26T23:07:19.542Z] Copying: 799/1024 [MB] (11 MBps) [2024-11-26T23:07:20.488Z] Copying: 811/1024 [MB] (12 MBps) [2024-11-26T23:07:21.436Z] Copying: 831/1024 [MB] (19 MBps) [2024-11-26T23:07:22.383Z] Copying: 844/1024 [MB] (12 MBps) [2024-11-26T23:07:23.329Z] Copying: 854/1024 [MB] (10 MBps) [2024-11-26T23:07:24.718Z] Copying: 871/1024 [MB] (16 MBps) [2024-11-26T23:07:25.668Z] Copying: 883/1024 [MB] (12 MBps) [2024-11-26T23:07:26.631Z] Copying: 895/1024 [MB] (12 MBps) [2024-11-26T23:07:27.577Z] Copying: 905/1024 [MB] (10 MBps) [2024-11-26T23:07:28.526Z] Copying: 916/1024 [MB] (10 MBps) [2024-11-26T23:07:29.467Z] Copying: 929/1024 [MB] (12 MBps) 
[2024-11-26T23:07:30.410Z] Copying: 939/1024 [MB] (10 MBps) [2024-11-26T23:07:31.355Z] Copying: 961/1024 [MB] (21 MBps) [2024-11-26T23:07:32.744Z] Copying: 982/1024 [MB] (21 MBps) [2024-11-26T23:07:33.318Z] Copying: 1003/1024 [MB] (21 MBps) [2024-11-26T23:07:33.580Z] Copying: 1021/1024 [MB] (17 MBps) [2024-11-26T23:07:33.845Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-26 23:07:33.599950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.718 [2024-11-26 23:07:33.600563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:54.718 [2024-11-26 23:07:33.600642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:54.718 [2024-11-26 23:07:33.600671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.718 [2024-11-26 23:07:33.600750] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:54.718 [2024-11-26 23:07:33.602055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.718 [2024-11-26 23:07:33.602140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:54.718 [2024-11-26 23:07:33.602172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.259 ms 00:22:54.718 [2024-11-26 23:07:33.602273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.718 [2024-11-26 23:07:33.603587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.718 [2024-11-26 23:07:33.603609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:54.718 [2024-11-26 23:07:33.603632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.224 ms 00:22:54.718 [2024-11-26 23:07:33.603642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.718 [2024-11-26 23:07:33.608198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.718 [2024-11-26 23:07:33.608337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:54.718 [2024-11-26 23:07:33.608428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.536 ms 00:22:54.718 [2024-11-26 23:07:33.608458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.718 [2024-11-26 23:07:33.615789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.718 [2024-11-26 23:07:33.615944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:54.718 [2024-11-26 23:07:33.616012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.235 ms 00:22:54.718 [2024-11-26 23:07:33.616046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.718 [2024-11-26 23:07:33.619079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.718 [2024-11-26 23:07:33.619287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:54.718 [2024-11-26 23:07:33.619387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.948 ms 00:22:54.718 [2024-11-26 23:07:33.619415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.718 [2024-11-26 23:07:33.624536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.718 [2024-11-26 23:07:33.624703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:54.718 [2024-11-26 23:07:33.624765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.028 ms 
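The copy loop above samples roughly once per second and closes with "average 17 MBps". That figure checks out against the wall clock: 1024 MB moved between the end of FTL startup (23:06:35.173) and the final 1024/1024 sample (23:07:33.845Z) works out to about 17.5 MB/s:

from datetime import datetime

# Sketch: reported average vs. the two timestamps read from the log above.
start = datetime.fromisoformat("2024-11-26 23:06:35.173")  # FTL startup done
end = datetime.fromisoformat("2024-11-26 23:07:33.845")    # last Copying sample
print(1024 / (end - start).total_seconds())                # -> ~17.45 MB/s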
00:22:54.718 [2024-11-26 23:07:33.624788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.718 [2024-11-26 23:07:33.624929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.718 [2024-11-26 23:07:33.624957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:54.718 [2024-11-26 23:07:33.624979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:22:54.718 [2024-11-26 23:07:33.625041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.718 [2024-11-26 23:07:33.627916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.718 [2024-11-26 23:07:33.628078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:54.718 [2024-11-26 23:07:33.628136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.841 ms 00:22:54.718 [2024-11-26 23:07:33.628159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.718 [2024-11-26 23:07:33.630508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.718 [2024-11-26 23:07:33.630651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:54.718 [2024-11-26 23:07:33.630705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.268 ms 00:22:54.718 [2024-11-26 23:07:33.631085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.718 [2024-11-26 23:07:33.633127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.718 [2024-11-26 23:07:33.633254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:54.718 [2024-11-26 23:07:33.633321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.983 ms 00:22:54.718 [2024-11-26 23:07:33.633346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.718 [2024-11-26 23:07:33.635560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.718 [2024-11-26 23:07:33.635710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:54.718 [2024-11-26 23:07:33.635767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.136 ms 00:22:54.718 [2024-11-26 23:07:33.635791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.718 [2024-11-26 23:07:33.635830] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:54.718 [2024-11-26 23:07:33.635860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:54.718 [2024-11-26 23:07:33.635893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:54.718 [2024-11-26 23:07:33.635921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:54.718 [2024-11-26 23:07:33.635999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:54.718 [2024-11-26 23:07:33.636033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:54.718 [2024-11-26 23:07:33.636062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:54.718 [2024-11-26 23:07:33.636091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:54.718 [2024-11-26 23:07:33.636120] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:54.718 [2024-11-26 23:07:33.636184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:54.718 [2024-11-26 23:07:33.636216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:54.718 [2024-11-26 23:07:33.636245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:54.718 [2024-11-26 23:07:33.636274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:54.718 [2024-11-26 23:07:33.636396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:54.718 [2024-11-26 23:07:33.636447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:54.718 [2024-11-26 23:07:33.636476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:54.718 [2024-11-26 23:07:33.636506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:54.718 [2024-11-26 23:07:33.636539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:54.718 [2024-11-26 23:07:33.636568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:54.718 [2024-11-26 23:07:33.636597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:54.718 [2024-11-26 23:07:33.636625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:54.718 [2024-11-26 23:07:33.636654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:54.718 [2024-11-26 23:07:33.636730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:54.718 [2024-11-26 23:07:33.636792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:54.718 [2024-11-26 23:07:33.636825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:54.719 [2024-11-26 23:07:33.636900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:54.719 [2024-11-26 23:07:33.636911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:54.719 [2024-11-26 23:07:33.636921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:54.719 [2024-11-26 23:07:33.636930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:54.719 [2024-11-26 23:07:33.636939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:54.719 [2024-11-26 23:07:33.636947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:54.719 [2024-11-26 23:07:33.636956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:54.719 [2024-11-26 23:07:33.636964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:54.719 [2024-11-26 
23:07:33.636973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
[Bands 34-100 repeat the identical record: 0 / 261120 wr_cnt: 0 state: free]
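Each record in the band dump above reads as valid blocks / band capacity, a per-band write counter, and the band state. A minimal sketch of how such a dump can be produced is below; the struct and names are illustrative placeholders, not SPDK's actual ftl_band definitions.

#include <stdio.h>
#include <stdint.h>

/* Hypothetical per-band bookkeeping mirroring the fields visible in the
 * dump above (valid blocks, capacity, write count, state). */
enum band_state { BAND_FREE, BAND_OPEN, BAND_CLOSED };

struct band {
    uint64_t valid_blocks; /* blocks still holding live data */
    uint64_t num_blocks;   /* band capacity, 261120 in this run */
    uint64_t wr_cnt;       /* completed whole-band writes */
    enum band_state state;
};

static const char *state_name(enum band_state s)
{
    switch (s) {
    case BAND_OPEN:   return "open";
    case BAND_CLOSED: return "closed";
    default:          return "free";
    }
}

/* Prints one line per band in the same shape as the ftl_dev_dump_bands
 * records in this log. */
static void dump_bands(const struct band *bands, size_t count)
{
    for (size_t i = 0; i < count; i++)
        printf("Band %zu: %llu / %llu wr_cnt: %llu state: %s\n",
               i + 1,
               (unsigned long long)bands[i].valid_blocks,
               (unsigned long long)bands[i].num_blocks,
               (unsigned long long)bands[i].wr_cnt,
               state_name(bands[i].state));
}

int main(void)
{
    struct band bands[3] = {
        { 106240, 261120, 1, BAND_OPEN }, /* like Band 1 after the copy */
        { 0, 261120, 0, BAND_FREE },
        { 0, 261120, 0, BAND_FREE },
    };
    dump_bands(bands, 3);
    return 0;
}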
[2024-11-26 23:07:33.637560] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[2024-11-26 23:07:33.637570] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 30f64c7f-41c7-49e6-993d-bc2c7c41b945
[2024-11-26 23:07:33.637578] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
[2024-11-26 23:07:33.637586] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
[2024-11-26 23:07:33.637594] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
[2024-11-26 23:07:33.637603] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
[2024-11-26 23:07:33.637612] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
[2024-11-26 23:07:33.637621] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
[2024-11-26 23:07:33.637640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
[2024-11-26 23:07:33.637647] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
[2024-11-26 23:07:33.637654] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics (duration: 1.835 ms, status: 0)
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P (duration: 3.211 ms, status: 0)
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing (duration: 0.136 ms, status: 0)
[Rollback trace, each step duration: 0.000 ms, status: 0: Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev]
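The WAF figure in the statistics above is total writes divided by user writes: 960 internal writes against 0 user writes yields "inf", while the post-copy dump near the end of this run reports 107200 / 106240 = 1.0090. A quick sketch of that computation (a hypothetical helper, not SPDK's code):

#include <math.h>
#include <stdio.h>
#include <stdint.h>

/* Write-amplification factor as reported by ftl_dev_dump_stats:
 * total device writes over user-initiated writes. With no user
 * writes yet, the ratio is reported as "inf". */
static double waf(uint64_t total_writes, uint64_t user_writes)
{
    if (user_writes == 0)
        return INFINITY;
    return (double)total_writes / (double)user_writes;
}

int main(void)
{
    printf("WAF: %.4f\n", waf(960, 0));          /* inf, as in the dump above */
    printf("WAF: %.4f\n", waf(107200, 106240));  /* 1.0090, as after the copy */
    return 0;
}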
[2024-11-26 23:07:33.687398] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 87.417 ms, result 0
23:07:33 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
/home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
23:07:36 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072
[2024-11-26 23:07:36.377657] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization...
[2024-11-26 23:07:36.377804] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91597 ]
[2024-11-26 23:07:36.515543] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
[2024-11-26 23:07:36.543930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-26 23:07:36.584596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-26 23:07:36.736831] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-26 23:07:36.736933] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration (duration: 0.008 ms, status: 0)
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev (duration: 0.054 ms, status: 0)
[2024-11-26 23:07:36.902043] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-11-26 23:07:36.902358] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev (duration: 0.346 ms, status: 0)
[2024-11-26 23:07:36.904876] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
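The md5sum -c step above is what gives the restore test its pass/fail signal: the file read back through ftl0 must hash to the digest recorded before shutdown. A rough, hedged equivalent in C using OpenSSL's EVP API (paths and the expected digest below are placeholders; the real digest lives in testfile.md5):

#include <openssl/evp.h>
#include <stdio.h>
#include <string.h>

/* Hash a file with MD5 and compare against an expected hex digest,
 * roughly what `md5sum -c` does for the restore test. Link with -lcrypto. */
static int md5_file_matches(const char *path, const char *expected_hex)
{
    unsigned char buf[65536], digest[EVP_MAX_MD_SIZE];
    unsigned int digest_len = 0;
    char hex[2 * EVP_MAX_MD_SIZE + 1];
    FILE *f = fopen(path, "rb");
    if (!f)
        return -1;

    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    EVP_DigestInit_ex(ctx, EVP_md5(), NULL);
    size_t n;
    while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
        EVP_DigestUpdate(ctx, buf, n);
    EVP_DigestFinal_ex(ctx, digest, &digest_len);
    EVP_MD_CTX_free(ctx);
    fclose(f);

    for (unsigned int i = 0; i < digest_len; i++)
        sprintf(&hex[2 * i], "%02x", digest[i]);
    return strcmp(hex, expected_hex) == 0;
}

int main(void)
{
    /* Placeholder digest for illustration only. */
    if (md5_file_matches("testfile", "d41d8cd98f00b204e9800998ecf8427e"))
        printf("testfile: OK\n");
    else
        printf("testfile: FAILED\n");
    return 0;
}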
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block (duration: 4.654 ms, status: 0)
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block (duration: 0.030 ms, status: 0)
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools (duration: 11.528 ms, status: 0)
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands (duration: 0.084 ms, status: 0)
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device (duration: 0.015 ms, status: 0)
[2024-11-26 23:07:36.921851] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel (duration: 2.743 ms, status: 0)
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands (duration: 0.016 ms, status: 0)
[2024-11-26 23:07:36.924887] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-11-26 23:07:36.924917] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
[2024-11-26 23:07:36.924961] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
[2024-11-26 23:07:36.924985] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
[2024-11-26 23:07:36.925103] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
[2024-11-26 23:07:36.925119] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
[2024-11-26 23:07:36.925131] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
[2024-11-26 23:07:36.925142] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
[2024-11-26 23:07:36.925151] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
[2024-11-26 23:07:36.925159] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
[2024-11-26 23:07:36.925167] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
[2024-11-26 23:07:36.925175] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
[2024-11-26 23:07:36.925183] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout (duration: 0.310 ms, status: 0)
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout (duration: 0.086 ms, status: 0)
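Those layout numbers are self-consistent: 20971520 L2P entries at 4 bytes per address is exactly 80 MiB, which is the size of the l2p region in the dump that follows. A quick check (plain arithmetic, not SPDK code; the per-entry interpretation is an assumption):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Values reported by ftl_layout_setup above. */
    uint64_t l2p_entries = 20971520; /* presumably one entry per user block */
    uint64_t addr_size = 4;          /* bytes per L2P address */

    uint64_t bytes = l2p_entries * addr_size;
    printf("L2P table: %llu bytes = %llu MiB\n",
           (unsigned long long)bytes,
           (unsigned long long)(bytes / (1024 * 1024)));
    /* Prints: L2P table: 83886080 bytes = 80 MiB, matching the
     * 80.00 MiB l2p region in the layout dump below. */
    return 0;
}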
[2024-11-26 23:07:36.925461] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
  Region sb:               offset 0.00 MiB,       blocks 0.12 MiB
  Region l2p:              offset 0.12 MiB,       blocks 80.00 MiB
  Region band_md:          offset 80.12 MiB,      blocks 0.50 MiB
  Region band_md_mirror:   offset 80.62 MiB,      blocks 0.50 MiB
  Region nvc_md:           offset 113.88 MiB,     blocks 0.12 MiB
  Region nvc_md_mirror:    offset 114.00 MiB,     blocks 0.12 MiB
  Region p2l0:             offset 81.12 MiB,      blocks 8.00 MiB
  Region p2l1:             offset 89.12 MiB,      blocks 8.00 MiB
  Region p2l2:             offset 97.12 MiB,      blocks 8.00 MiB
  Region p2l3:             offset 105.12 MiB,     blocks 8.00 MiB
  Region trim_md:          offset 113.12 MiB,     blocks 0.25 MiB
  Region trim_md_mirror:   offset 113.38 MiB,     blocks 0.25 MiB
  Region trim_log:         offset 113.62 MiB,     blocks 0.12 MiB
  Region trim_log_mirror:  offset 113.75 MiB,     blocks 0.12 MiB
[2024-11-26 23:07:36.925839] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
  Region sb_mirror:        offset 0.00 MiB,       blocks 0.12 MiB
  Region vmap:             offset 102400.25 MiB,  blocks 3.38 MiB
  Region data_btm:         offset 0.25 MiB,       blocks 102400.00 MiB
[2024-11-26 23:07:36.925916] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
  Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
  Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
  Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
  Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
  Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
  Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
  Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
  Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
  Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
  Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
  Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
  Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
  Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
  Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
  Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
[2024-11-26 23:07:36.926042] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
  Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
  Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
  Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
  Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
  Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade (duration: 0.702 ms, status: 0)
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata (duration: 20.527 ms, status: 0)
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses (duration: 0.070 ms, status: 0)
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache (duration: 26.501 ms, status: 0)
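The layout and superblock dumps above are essentially a serialized table of region descriptors: type, version, block offset, block size. A hedged sketch of such a descriptor table and its dump loop (names are illustrative, not SPDK's ftl_layout internals):

#include <stdio.h>
#include <stdint.h>

/* Illustrative region descriptor mirroring the fields printed by the
 * ftl_superblock_v5_md_layout_dump records above. */
struct md_region {
    uint32_t type;     /* region type id, e.g. 0x0 for the superblock */
    uint32_t ver;      /* on-disk format version of the region */
    uint64_t blk_offs; /* first block of the region */
    uint64_t blk_sz;   /* region length in blocks */
};

static void md_layout_dump(const struct md_region *regions, size_t count)
{
    for (size_t i = 0; i < count; i++)
        printf("Region type:0x%x ver:%u blk_offs:0x%llx blk_sz:0x%llx\n",
               regions[i].type, regions[i].ver,
               (unsigned long long)regions[i].blk_offs,
               (unsigned long long)regions[i].blk_sz);
}

int main(void)
{
    /* First few rows of the nvc table from the dump above. */
    struct md_region nvc[] = {
        { 0x0, 5, 0x0,    0x20 },
        { 0x2, 0, 0x20,   0x5000 },
        { 0x3, 2, 0x5020, 0x80 },
    };
    md_layout_dump(nvc, sizeof(nvc) / sizeof(nvc[0]));
    return 0;
}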
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map (duration: 0.006 ms, status: 0)
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map (duration: 0.698 ms, status: 0)
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata (duration: 0.180 ms, status: 0)
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc (duration: 11.555 ms, status: 0)
[2024-11-26 23:07:36.992676] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
[2024-11-26 23:07:36.992854] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata (duration: 5.024 ms, status: 0)
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata (duration: 16.291 ms, status: 0)
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata (duration: 3.041 ms, status: 0)
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata (duration: 2.534 ms, status: 0)
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing (duration: 0.272 ms, status: 0)
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints (duration: 29.387 ms, status: 0)
[2024-11-26 23:07:37.055432] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P (duration: 12.582 ms, status: 0)
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P (duration: 0.017 ms, status: 0)
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization (duration: 0.048 ms, status: 0)
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller (duration: 0.008 ms, status: 0)
[2024-11-26 23:07:37.060480] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup (duration: 0.038 ms, status: 0)
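Every management step in this log is bracketed the same way: record a start time, run the step, then report name, duration, and status. A hedged sketch of that kind of instrumentation (the clock handling is illustrative, not mngt/ftl_mngt.c itself):

#include <stdio.h>
#include <time.h>

/* Measure one management step and report it trace_step-style. */
static int run_step(const char *name, int (*fn)(void))
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    int status = fn();
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("Action\n name: %s\n duration: %.3f ms\n status: %d\n",
           name, ms, status);
    return status;
}

static int restore_l2p(void) { return 0; } /* placeholder step body */

int main(void)
{
    return run_step("Restore L2P", restore_l2p);
}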
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state (duration: 5.283 ms, status: 0)
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization (duration: 0.058 ms, status: 0)
[2024-11-26 23:07:37.067970] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 165.526 ms, result 0
[spdk_dd progress, 2024-11-26T23:07:39Z through 2024-11-26T23:08:30Z: Copying 10/1024 [MB] up to 1024/1024 [MB], per-interval rates between 10 and 42 MBps, final summary: Copying: 1024/1024 [MB] (average 19 MBps)]
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel (duration: 0.006 ms, status: 0)
[2024-11-26 23:08:30.019959] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device (duration: 1.246 ms, status: 0)
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller (duration: 8.345 ms, status: 0)
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist L2P (duration: 24.784 ms, status: 0)
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finish L2P trims (duration: 6.157 ms, status: 0)
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist NV cache metadata (duration: 2.948 ms, status: 0)
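The copy summary above ("average 19 MBps" for 1024 MB) lines up with the wall-clock stamps: roughly 23:07:37 to 23:08:30 is about 53 seconds. A trivial cross-check (plain arithmetic; the elapsed-seconds figure is read off the timestamps, not reported by the tool):

#include <stdio.h>

int main(void)
{
    /* Figures taken from the spdk_dd progress output above. */
    double copied_mb = 1024.0;
    double elapsed_s = 53.0; /* ~23:07:37 to ~23:08:30 wall clock */

    printf("average: %.1f MBps\n", copied_mb / elapsed_s);
    /* Prints ~19.3 MBps, consistent with the reported "average 19 MBps". */
    return 0;
}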
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist valid map metadata (duration: 5.088 ms, status: 0)
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist P2L metadata (duration: 147.819 ms, status: 0)
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist band info metadata (duration: 2.487 ms, status: 0)
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist trim metadata (duration: 2.115 ms, status: 0)
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist superblock (duration: 1.557 ms, status: 0)
mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL clean state (duration: 1.484 ms, status: 0)
[2024-11-26 23:08:30.229879] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[2024-11-26 23:08:30.229897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 106240 / 261120 wr_cnt: 1 state: open
[Bands 2-100 each report the identical record: 0 / 261120 wr_cnt: 0 state: free]
[2024-11-26 23:08:30.230815] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[2024-11-26 23:08:30.230824] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 30f64c7f-41c7-49e6-993d-bc2c7c41b945
[2024-11-26 23:08:30.230837] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 106240
[2024-11-26 23:08:30.230850] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 107200
ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 106240 00:23:51.190 [2024-11-26 23:08:30.230873] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0090 00:23:51.190 [2024-11-26 23:08:30.230886] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:51.190 [2024-11-26 23:08:30.230894] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:51.190 [2024-11-26 23:08:30.230911] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:51.191 [2024-11-26 23:08:30.230918] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:51.191 [2024-11-26 23:08:30.230925] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:51.191 [2024-11-26 23:08:30.230933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.191 [2024-11-26 23:08:30.230947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:51.191 [2024-11-26 23:08:30.230957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.055 ms 00:23:51.191 [2024-11-26 23:08:30.230965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.191 [2024-11-26 23:08:30.234082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.191 [2024-11-26 23:08:30.234116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:51.191 [2024-11-26 23:08:30.234127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.097 ms 00:23:51.191 [2024-11-26 23:08:30.234136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.191 [2024-11-26 23:08:30.234286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.191 [2024-11-26 23:08:30.234326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:51.191 [2024-11-26 23:08:30.234342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:23:51.191 [2024-11-26 23:08:30.234355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.191 [2024-11-26 23:08:30.244245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.191 [2024-11-26 23:08:30.244358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:51.191 [2024-11-26 23:08:30.244373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.191 [2024-11-26 23:08:30.244382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.191 [2024-11-26 23:08:30.244443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.191 [2024-11-26 23:08:30.244453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:51.191 [2024-11-26 23:08:30.244464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.191 [2024-11-26 23:08:30.244501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.191 [2024-11-26 23:08:30.244571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.191 [2024-11-26 23:08:30.244582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:51.191 [2024-11-26 23:08:30.244592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.191 [2024-11-26 23:08:30.244601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.191 [2024-11-26 23:08:30.244617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:23:51.191 [2024-11-26 23:08:30.244627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:51.191 [2024-11-26 23:08:30.244636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.191 [2024-11-26 23:08:30.244644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.191 [2024-11-26 23:08:30.263624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.191 [2024-11-26 23:08:30.263870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:51.191 [2024-11-26 23:08:30.263893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.191 [2024-11-26 23:08:30.263903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.191 [2024-11-26 23:08:30.279119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.191 [2024-11-26 23:08:30.279359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:51.191 [2024-11-26 23:08:30.279380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.191 [2024-11-26 23:08:30.279408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.191 [2024-11-26 23:08:30.279484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.191 [2024-11-26 23:08:30.279495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:51.191 [2024-11-26 23:08:30.279505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.191 [2024-11-26 23:08:30.279519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.191 [2024-11-26 23:08:30.279561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.191 [2024-11-26 23:08:30.279571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:51.191 [2024-11-26 23:08:30.279582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.191 [2024-11-26 23:08:30.279591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.191 [2024-11-26 23:08:30.279689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.191 [2024-11-26 23:08:30.279701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:51.191 [2024-11-26 23:08:30.279711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.191 [2024-11-26 23:08:30.279720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.191 [2024-11-26 23:08:30.279760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.191 [2024-11-26 23:08:30.279771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:51.191 [2024-11-26 23:08:30.279781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.191 [2024-11-26 23:08:30.279789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.191 [2024-11-26 23:08:30.279847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.191 [2024-11-26 23:08:30.279859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:51.191 [2024-11-26 23:08:30.279874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.191 [2024-11-26 23:08:30.279884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.191 [2024-11-26 
23:08:30.279950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.191 [2024-11-26 23:08:30.279963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:51.191 [2024-11-26 23:08:30.279973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.191 [2024-11-26 23:08:30.279983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.191 [2024-11-26 23:08:30.280162] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 263.414 ms, result 0 00:23:52.574 00:23:52.574 00:23:52.574 23:08:31 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:23:52.835 [2024-11-26 23:08:31.725860] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:23:52.835 [2024-11-26 23:08:31.726047] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92164 ] 00:23:52.835 [2024-11-26 23:08:31.865902] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:52.835 [2024-11-26 23:08:31.897236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.835 [2024-11-26 23:08:31.937153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.096 [2024-11-26 23:08:32.086462] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:53.096 [2024-11-26 23:08:32.086846] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:53.358 [2024-11-26 23:08:32.249418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.358 [2024-11-26 23:08:32.249478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:53.358 [2024-11-26 23:08:32.249499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:53.358 [2024-11-26 23:08:32.249509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.358 [2024-11-26 23:08:32.249574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.358 [2024-11-26 23:08:32.249591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:53.358 [2024-11-26 23:08:32.249600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:23:53.358 [2024-11-26 23:08:32.249613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.358 [2024-11-26 23:08:32.249635] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:53.358 [2024-11-26 23:08:32.249911] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:53.358 [2024-11-26 23:08:32.249927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.358 [2024-11-26 23:08:32.249940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:53.358 [2024-11-26 23:08:32.249950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:23:53.358 [2024-11-26 23:08:32.249958] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.358 [2024-11-26 23:08:32.252195] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:53.358 [2024-11-26 23:08:32.256970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.358 [2024-11-26 23:08:32.257019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:53.358 [2024-11-26 23:08:32.257042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.778 ms 00:23:53.358 [2024-11-26 23:08:32.257051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.358 [2024-11-26 23:08:32.257131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.358 [2024-11-26 23:08:32.257142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:53.358 [2024-11-26 23:08:32.257152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:53.358 [2024-11-26 23:08:32.257159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.358 [2024-11-26 23:08:32.268503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.358 [2024-11-26 23:08:32.268551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:53.358 [2024-11-26 23:08:32.268564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.298 ms 00:23:53.358 [2024-11-26 23:08:32.268583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.358 [2024-11-26 23:08:32.268693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.358 [2024-11-26 23:08:32.268704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:53.358 [2024-11-26 23:08:32.268713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:23:53.358 [2024-11-26 23:08:32.268722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.358 [2024-11-26 23:08:32.268784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.358 [2024-11-26 23:08:32.268795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:53.358 [2024-11-26 23:08:32.268813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:53.358 [2024-11-26 23:08:32.268823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.358 [2024-11-26 23:08:32.268847] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:53.358 [2024-11-26 23:08:32.271523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.358 [2024-11-26 23:08:32.271698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:53.358 [2024-11-26 23:08:32.271716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.682 ms 00:23:53.358 [2024-11-26 23:08:32.271724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.358 [2024-11-26 23:08:32.271769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.358 [2024-11-26 23:08:32.271779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:53.358 [2024-11-26 23:08:32.271795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:53.358 [2024-11-26 23:08:32.271803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.358 [2024-11-26 23:08:32.271827] ftl_layout.c: 
613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:53.358 [2024-11-26 23:08:32.271854] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:53.358 [2024-11-26 23:08:32.271895] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:53.358 [2024-11-26 23:08:32.271912] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:53.358 [2024-11-26 23:08:32.272031] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:53.358 [2024-11-26 23:08:32.272045] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:53.358 [2024-11-26 23:08:32.272057] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:53.358 [2024-11-26 23:08:32.272067] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:53.358 [2024-11-26 23:08:32.272077] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:53.358 [2024-11-26 23:08:32.272086] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:53.358 [2024-11-26 23:08:32.272093] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:53.358 [2024-11-26 23:08:32.272101] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:53.358 [2024-11-26 23:08:32.272109] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:53.359 [2024-11-26 23:08:32.272117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.359 [2024-11-26 23:08:32.272125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:53.359 [2024-11-26 23:08:32.272134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:23:53.359 [2024-11-26 23:08:32.272150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.359 [2024-11-26 23:08:32.272234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.359 [2024-11-26 23:08:32.272244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:53.359 [2024-11-26 23:08:32.272253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:23:53.359 [2024-11-26 23:08:32.272269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.359 [2024-11-26 23:08:32.272400] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:53.359 [2024-11-26 23:08:32.272413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:53.359 [2024-11-26 23:08:32.272423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:53.359 [2024-11-26 23:08:32.272432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:53.359 [2024-11-26 23:08:32.272445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:53.359 [2024-11-26 23:08:32.272457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:53.359 [2024-11-26 23:08:32.272475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:53.359 [2024-11-26 23:08:32.272484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 
00:23:53.359 [2024-11-26 23:08:32.272494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:53.359 [2024-11-26 23:08:32.272502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:53.359 [2024-11-26 23:08:32.272511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:53.359 [2024-11-26 23:08:32.272519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:53.359 [2024-11-26 23:08:32.272527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:53.359 [2024-11-26 23:08:32.272536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:53.359 [2024-11-26 23:08:32.272544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:53.359 [2024-11-26 23:08:32.272555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:53.359 [2024-11-26 23:08:32.272563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:53.359 [2024-11-26 23:08:32.272571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:53.359 [2024-11-26 23:08:32.272578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:53.359 [2024-11-26 23:08:32.272586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:53.359 [2024-11-26 23:08:32.272594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:53.359 [2024-11-26 23:08:32.272602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:53.359 [2024-11-26 23:08:32.272610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:53.359 [2024-11-26 23:08:32.272618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:53.359 [2024-11-26 23:08:32.272625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:53.359 [2024-11-26 23:08:32.272631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:53.359 [2024-11-26 23:08:32.272638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:53.359 [2024-11-26 23:08:32.272645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:53.359 [2024-11-26 23:08:32.272652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:53.359 [2024-11-26 23:08:32.272659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:53.359 [2024-11-26 23:08:32.272674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:53.359 [2024-11-26 23:08:32.272684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:53.359 [2024-11-26 23:08:32.272691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:53.359 [2024-11-26 23:08:32.272697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:53.359 [2024-11-26 23:08:32.272704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:53.359 [2024-11-26 23:08:32.272711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:53.359 [2024-11-26 23:08:32.272717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:53.359 [2024-11-26 23:08:32.272726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:53.359 [2024-11-26 23:08:32.272733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:53.359 [2024-11-26 23:08:32.272740] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:53.359 [2024-11-26 23:08:32.272747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:53.359 [2024-11-26 23:08:32.272754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:53.359 [2024-11-26 23:08:32.272760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:53.359 [2024-11-26 23:08:32.272768] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:53.359 [2024-11-26 23:08:32.272777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:53.359 [2024-11-26 23:08:32.272784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:53.359 [2024-11-26 23:08:32.272792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:53.359 [2024-11-26 23:08:32.272802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:53.359 [2024-11-26 23:08:32.272809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:53.359 [2024-11-26 23:08:32.272816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:53.359 [2024-11-26 23:08:32.272825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:53.359 [2024-11-26 23:08:32.272831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:53.359 [2024-11-26 23:08:32.272838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:53.359 [2024-11-26 23:08:32.272847] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:53.359 [2024-11-26 23:08:32.272868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:53.359 [2024-11-26 23:08:32.272877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:53.359 [2024-11-26 23:08:32.272885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:53.359 [2024-11-26 23:08:32.272892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:53.359 [2024-11-26 23:08:32.272899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:53.359 [2024-11-26 23:08:32.272906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:53.359 [2024-11-26 23:08:32.272913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:53.359 [2024-11-26 23:08:32.272921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:53.359 [2024-11-26 23:08:32.272928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:53.359 [2024-11-26 23:08:32.272937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:53.359 [2024-11-26 23:08:32.272945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 
blk_offs:0x71a0 blk_sz:0x20 00:23:53.359 [2024-11-26 23:08:32.272952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:53.359 [2024-11-26 23:08:32.272959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:53.359 [2024-11-26 23:08:32.272966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:53.359 [2024-11-26 23:08:32.272974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:53.359 [2024-11-26 23:08:32.272983] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:53.359 [2024-11-26 23:08:32.272992] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:53.359 [2024-11-26 23:08:32.273006] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:53.359 [2024-11-26 23:08:32.273015] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:53.359 [2024-11-26 23:08:32.273023] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:53.359 [2024-11-26 23:08:32.273031] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:53.359 [2024-11-26 23:08:32.273039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.359 [2024-11-26 23:08:32.273047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:53.359 [2024-11-26 23:08:32.273054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.711 ms 00:23:53.359 [2024-11-26 23:08:32.273065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.359 [2024-11-26 23:08:32.292983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.359 [2024-11-26 23:08:32.293032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:53.359 [2024-11-26 23:08:32.293045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.850 ms 00:23:53.359 [2024-11-26 23:08:32.293054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.359 [2024-11-26 23:08:32.293150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.359 [2024-11-26 23:08:32.293166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:53.359 [2024-11-26 23:08:32.293174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:23:53.359 [2024-11-26 23:08:32.293189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.360 [2024-11-26 23:08:32.317602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.360 [2024-11-26 23:08:32.317674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:53.360 [2024-11-26 23:08:32.317693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.347 ms 00:23:53.360 [2024-11-26 23:08:32.317706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:53.360 [2024-11-26 23:08:32.317781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.360 [2024-11-26 23:08:32.317797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:53.360 [2024-11-26 23:08:32.317810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:53.360 [2024-11-26 23:08:32.317826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.360 [2024-11-26 23:08:32.318653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.360 [2024-11-26 23:08:32.318702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:53.360 [2024-11-26 23:08:32.318727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.745 ms 00:23:53.360 [2024-11-26 23:08:32.318741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.360 [2024-11-26 23:08:32.318975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.360 [2024-11-26 23:08:32.318992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:53.360 [2024-11-26 23:08:32.319013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:23:53.360 [2024-11-26 23:08:32.319025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.360 [2024-11-26 23:08:32.330114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.360 [2024-11-26 23:08:32.330158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:53.360 [2024-11-26 23:08:32.330171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.051 ms 00:23:53.360 [2024-11-26 23:08:32.330191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.360 [2024-11-26 23:08:32.334867] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:53.360 [2024-11-26 23:08:32.334918] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:53.360 [2024-11-26 23:08:32.334940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.360 [2024-11-26 23:08:32.334950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:53.360 [2024-11-26 23:08:32.334966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.607 ms 00:23:53.360 [2024-11-26 23:08:32.334974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.360 [2024-11-26 23:08:32.350990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.360 [2024-11-26 23:08:32.351166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:53.360 [2024-11-26 23:08:32.351187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.963 ms 00:23:53.360 [2024-11-26 23:08:32.351197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.360 [2024-11-26 23:08:32.354078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.360 [2024-11-26 23:08:32.354114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:53.360 [2024-11-26 23:08:32.354125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.823 ms 00:23:53.360 [2024-11-26 23:08:32.354133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.360 [2024-11-26 23:08:32.356788] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.360 [2024-11-26 23:08:32.356934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:53.360 [2024-11-26 23:08:32.356992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.610 ms 00:23:53.360 [2024-11-26 23:08:32.357015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.360 [2024-11-26 23:08:32.358521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.360 [2024-11-26 23:08:32.358905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:53.360 [2024-11-26 23:08:32.359120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.932 ms 00:23:53.360 [2024-11-26 23:08:32.359203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.360 [2024-11-26 23:08:32.392265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.360 [2024-11-26 23:08:32.392526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:53.360 [2024-11-26 23:08:32.392593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.788 ms 00:23:53.360 [2024-11-26 23:08:32.392632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.360 [2024-11-26 23:08:32.401501] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:53.360 [2024-11-26 23:08:32.405551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.360 [2024-11-26 23:08:32.405702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:53.360 [2024-11-26 23:08:32.405761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.857 ms 00:23:53.360 [2024-11-26 23:08:32.405784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.360 [2024-11-26 23:08:32.405908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.360 [2024-11-26 23:08:32.405944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:53.360 [2024-11-26 23:08:32.405968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:23:53.360 [2024-11-26 23:08:32.405988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.360 [2024-11-26 23:08:32.408312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.360 [2024-11-26 23:08:32.408482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:53.360 [2024-11-26 23:08:32.408543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.217 ms 00:23:53.360 [2024-11-26 23:08:32.408569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.360 [2024-11-26 23:08:32.408632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.360 [2024-11-26 23:08:32.408658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:53.360 [2024-11-26 23:08:32.408689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:53.360 [2024-11-26 23:08:32.408709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.360 [2024-11-26 23:08:32.408768] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:53.360 [2024-11-26 23:08:32.408873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.360 [2024-11-26 23:08:32.408897] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:53.360 [2024-11-26 23:08:32.408918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:23:53.360 [2024-11-26 23:08:32.408938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.360 [2024-11-26 23:08:32.415938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.360 [2024-11-26 23:08:32.416104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:53.360 [2024-11-26 23:08:32.416162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.961 ms 00:23:53.360 [2024-11-26 23:08:32.416186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.360 [2024-11-26 23:08:32.416716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.360 [2024-11-26 23:08:32.416913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:53.360 [2024-11-26 23:08:32.417168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:23:53.360 [2024-11-26 23:08:32.417248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.360 [2024-11-26 23:08:32.431277] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 176.736 ms, result 0 00:23:54.762  [2024-11-26T23:08:34.831Z] Copying: 15/1024 [MB] (15 MBps) [2024-11-26T23:08:35.776Z] Copying: 32/1024 [MB] (17 MBps) [2024-11-26T23:08:36.721Z] Copying: 43/1024 [MB] (10 MBps) [2024-11-26T23:08:37.665Z] Copying: 54/1024 [MB] (10 MBps) [2024-11-26T23:08:39.063Z] Copying: 64/1024 [MB] (10 MBps) [2024-11-26T23:08:39.633Z] Copying: 75/1024 [MB] (10 MBps) [2024-11-26T23:08:41.017Z] Copying: 86/1024 [MB] (10 MBps) [2024-11-26T23:08:41.960Z] Copying: 109/1024 [MB] (23 MBps) [2024-11-26T23:08:42.936Z] Copying: 120/1024 [MB] (10 MBps) [2024-11-26T23:08:43.879Z] Copying: 130/1024 [MB] (10 MBps) [2024-11-26T23:08:44.820Z] Copying: 145/1024 [MB] (14 MBps) [2024-11-26T23:08:45.765Z] Copying: 173/1024 [MB] (28 MBps) [2024-11-26T23:08:46.736Z] Copying: 188/1024 [MB] (14 MBps) [2024-11-26T23:08:47.682Z] Copying: 209/1024 [MB] (21 MBps) [2024-11-26T23:08:48.625Z] Copying: 219/1024 [MB] (10 MBps) [2024-11-26T23:08:50.009Z] Copying: 236/1024 [MB] (17 MBps) [2024-11-26T23:08:50.949Z] Copying: 258/1024 [MB] (21 MBps) [2024-11-26T23:08:51.890Z] Copying: 271/1024 [MB] (13 MBps) [2024-11-26T23:08:52.834Z] Copying: 283/1024 [MB] (11 MBps) [2024-11-26T23:08:53.777Z] Copying: 298/1024 [MB] (15 MBps) [2024-11-26T23:08:54.718Z] Copying: 312/1024 [MB] (14 MBps) [2024-11-26T23:08:55.681Z] Copying: 333/1024 [MB] (21 MBps) [2024-11-26T23:08:57.069Z] Copying: 351/1024 [MB] (17 MBps) [2024-11-26T23:08:57.641Z] Copying: 363/1024 [MB] (12 MBps) [2024-11-26T23:08:59.025Z] Copying: 374/1024 [MB] (10 MBps) [2024-11-26T23:08:59.966Z] Copying: 387/1024 [MB] (12 MBps) [2024-11-26T23:09:00.908Z] Copying: 397/1024 [MB] (10 MBps) [2024-11-26T23:09:01.854Z] Copying: 412/1024 [MB] (15 MBps) [2024-11-26T23:09:02.795Z] Copying: 424/1024 [MB] (11 MBps) [2024-11-26T23:09:03.735Z] Copying: 446/1024 [MB] (22 MBps) [2024-11-26T23:09:04.687Z] Copying: 466/1024 [MB] (20 MBps) [2024-11-26T23:09:06.066Z] Copying: 480/1024 [MB] (13 MBps) [2024-11-26T23:09:06.634Z] Copying: 495/1024 [MB] (14 MBps) [2024-11-26T23:09:08.028Z] Copying: 521/1024 [MB] (26 MBps) [2024-11-26T23:09:08.969Z] Copying: 543/1024 [MB] (22 MBps) [2024-11-26T23:09:09.911Z] Copying: 565/1024 [MB] (22 MBps) 
[2024-11-26T23:09:10.855Z] Copying: 576/1024 [MB] (10 MBps) [2024-11-26T23:09:11.799Z] Copying: 594/1024 [MB] (18 MBps) [2024-11-26T23:09:12.745Z] Copying: 613/1024 [MB] (18 MBps) [2024-11-26T23:09:13.688Z] Copying: 632/1024 [MB] (19 MBps) [2024-11-26T23:09:14.633Z] Copying: 645/1024 [MB] (13 MBps) [2024-11-26T23:09:15.715Z] Copying: 663/1024 [MB] (17 MBps) [2024-11-26T23:09:16.658Z] Copying: 676/1024 [MB] (13 MBps) [2024-11-26T23:09:18.047Z] Copying: 697/1024 [MB] (20 MBps) [2024-11-26T23:09:18.988Z] Copying: 707/1024 [MB] (10 MBps) [2024-11-26T23:09:19.931Z] Copying: 718/1024 [MB] (10 MBps) [2024-11-26T23:09:20.890Z] Copying: 728/1024 [MB] (10 MBps) [2024-11-26T23:09:21.835Z] Copying: 739/1024 [MB] (10 MBps) [2024-11-26T23:09:22.779Z] Copying: 753/1024 [MB] (14 MBps) [2024-11-26T23:09:23.728Z] Copying: 768/1024 [MB] (14 MBps) [2024-11-26T23:09:24.672Z] Copying: 788/1024 [MB] (19 MBps) [2024-11-26T23:09:26.063Z] Copying: 807/1024 [MB] (19 MBps) [2024-11-26T23:09:26.635Z] Copying: 823/1024 [MB] (15 MBps) [2024-11-26T23:09:28.021Z] Copying: 836/1024 [MB] (13 MBps) [2024-11-26T23:09:28.959Z] Copying: 854/1024 [MB] (17 MBps) [2024-11-26T23:09:29.901Z] Copying: 884/1024 [MB] (30 MBps) [2024-11-26T23:09:30.844Z] Copying: 904/1024 [MB] (19 MBps) [2024-11-26T23:09:31.789Z] Copying: 924/1024 [MB] (20 MBps) [2024-11-26T23:09:32.734Z] Copying: 938/1024 [MB] (13 MBps) [2024-11-26T23:09:33.674Z] Copying: 954/1024 [MB] (16 MBps) [2024-11-26T23:09:35.075Z] Copying: 965/1024 [MB] (11 MBps) [2024-11-26T23:09:35.646Z] Copying: 983/1024 [MB] (17 MBps) [2024-11-26T23:09:37.029Z] Copying: 1005/1024 [MB] (21 MBps) [2024-11-26T23:09:37.029Z] Copying: 1022/1024 [MB] (17 MBps) [2024-11-26T23:09:37.029Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-26 23:09:36.819267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.902 [2024-11-26 23:09:36.819683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:57.902 [2024-11-26 23:09:36.819970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:57.902 [2024-11-26 23:09:36.820011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.902 [2024-11-26 23:09:36.820084] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:57.902 [2024-11-26 23:09:36.821258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.902 [2024-11-26 23:09:36.821374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:57.902 [2024-11-26 23:09:36.821396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.123 ms 00:24:57.902 [2024-11-26 23:09:36.821414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.902 [2024-11-26 23:09:36.821897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.902 [2024-11-26 23:09:36.822004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:57.902 [2024-11-26 23:09:36.822028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:24:57.902 [2024-11-26 23:09:36.822044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.902 [2024-11-26 23:09:36.830720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.902 [2024-11-26 23:09:36.830775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:57.902 [2024-11-26 23:09:36.830790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 8.645 ms 00:24:57.902 [2024-11-26 23:09:36.830807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.902 [2024-11-26 23:09:36.837015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.902 [2024-11-26 23:09:36.837204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:57.902 [2024-11-26 23:09:36.837227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.172 ms 00:24:57.902 [2024-11-26 23:09:36.837236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.902 [2024-11-26 23:09:36.840454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.902 [2024-11-26 23:09:36.840506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:57.902 [2024-11-26 23:09:36.840518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.124 ms 00:24:57.902 [2024-11-26 23:09:36.840527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.902 [2024-11-26 23:09:36.845437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.902 [2024-11-26 23:09:36.845491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:57.902 [2024-11-26 23:09:36.845519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.860 ms 00:24:57.902 [2024-11-26 23:09:36.845527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.174 [2024-11-26 23:09:37.129896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.175 [2024-11-26 23:09:37.129969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:58.175 [2024-11-26 23:09:37.129986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 284.311 ms 00:24:58.175 [2024-11-26 23:09:37.129995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.175 [2024-11-26 23:09:37.133640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.175 [2024-11-26 23:09:37.133694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:58.175 [2024-11-26 23:09:37.133725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.625 ms 00:24:58.175 [2024-11-26 23:09:37.133732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.175 [2024-11-26 23:09:37.136879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.175 [2024-11-26 23:09:37.136929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:58.175 [2024-11-26 23:09:37.136940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.100 ms 00:24:58.175 [2024-11-26 23:09:37.136948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.175 [2024-11-26 23:09:37.139748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.175 [2024-11-26 23:09:37.139961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:58.175 [2024-11-26 23:09:37.139981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.756 ms 00:24:58.175 [2024-11-26 23:09:37.139988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.175 [2024-11-26 23:09:37.142234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.175 [2024-11-26 23:09:37.142287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:58.175 [2024-11-26 
23:09:37.142317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.118 ms 00:24:58.175 [2024-11-26 23:09:37.142325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.175 [2024-11-26 23:09:37.142367] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:58.175 [2024-11-26 23:09:37.142385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:24:58.175 [2024-11-26 23:09:37.142397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:58.175 [2024-11-26 23:09:37.142406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:58.175 [2024-11-26 23:09:37.142414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:58.175 [2024-11-26 23:09:37.142422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:58.175 [2024-11-26 23:09:37.142431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:58.175 [2024-11-26 23:09:37.142438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:58.176 [2024-11-26 23:09:37.142447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:58.176 [2024-11-26 23:09:37.142455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:58.176 [2024-11-26 23:09:37.142463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:58.176 [2024-11-26 23:09:37.142471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:58.176 [2024-11-26 23:09:37.142479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:58.176 [2024-11-26 23:09:37.142486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:58.176 [2024-11-26 23:09:37.142494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:58.176 [2024-11-26 23:09:37.142501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:58.176 [2024-11-26 23:09:37.142509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:58.176 [2024-11-26 23:09:37.142516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:58.176 [2024-11-26 23:09:37.142523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:58.176 [2024-11-26 23:09:37.142531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:58.176 [2024-11-26 23:09:37.142538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:58.176 [2024-11-26 23:09:37.142546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:58.176 [2024-11-26 23:09:37.142553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:58.176 [2024-11-26 23:09:37.142561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:58.176 [2024-11-26 23:09:37.142568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:58.176 [2024-11-26 23:09:37.142575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:58.176 [2024-11-26 23:09:37.142583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:58.176 [2024-11-26 23:09:37.142593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:58.176 [2024-11-26 23:09:37.142600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:58.176 [2024-11-26 23:09:37.142608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:58.176 [2024-11-26 23:09:37.142620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:58.176 [2024-11-26 23:09:37.142630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:58.176 [2024-11-26 23:09:37.142639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:58.176 [2024-11-26 23:09:37.142647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:58.177 [2024-11-26 23:09:37.142655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:58.177 [2024-11-26 23:09:37.142663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:58.177 [2024-11-26 23:09:37.142671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:58.177 [2024-11-26 23:09:37.142679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:58.177 [2024-11-26 23:09:37.142686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:58.177 [2024-11-26 23:09:37.142694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:58.177 [2024-11-26 23:09:37.142701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:58.177 [2024-11-26 23:09:37.142710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:58.177 [2024-11-26 23:09:37.142717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:58.177 [2024-11-26 23:09:37.142726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:58.177 [2024-11-26 23:09:37.142734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:58.177 [2024-11-26 23:09:37.142742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:58.177 [2024-11-26 23:09:37.142749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:58.177 [2024-11-26 23:09:37.142756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:58.177 [2024-11-26 23:09:37.142764] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:58.177 [2024-11-26 23:09:37.142772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:58.177 [2024-11-26 23:09:37.142779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:58.177 [2024-11-26 23:09:37.142788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:58.177 [2024-11-26 23:09:37.142795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:58.177 [2024-11-26 23:09:37.142803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:58.177 [2024-11-26 23:09:37.142811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.142819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.142828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.142836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.142844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.142852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.142859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.142866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.142875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.142884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.142893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.142901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.142909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.142917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.142925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.142934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.142941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.142949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.142957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 
23:09:37.142965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.142973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.142981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.142990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.142998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.143006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.143014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.143022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.143029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.143037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.143044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.143052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.143061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.143068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.143077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.143084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:58.178 [2024-11-26 23:09:37.143091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:58.179 [2024-11-26 23:09:37.143099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:58.179 [2024-11-26 23:09:37.143106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:58.179 [2024-11-26 23:09:37.143114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:58.179 [2024-11-26 23:09:37.143121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:58.179 [2024-11-26 23:09:37.143130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:58.179 [2024-11-26 23:09:37.143140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:58.179 [2024-11-26 23:09:37.143148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:58.179 [2024-11-26 23:09:37.143156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 
00:24:58.179 [2024-11-26 23:09:37.143164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:24:58.179 [2024-11-26 23:09:37.143172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:24:58.179 [2024-11-26 23:09:37.143180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:24:58.179 [2024-11-26 23:09:37.143197] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:24:58.179 [2024-11-26 23:09:37.143206] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 30f64c7f-41c7-49e6-993d-bc2c7c41b945
00:24:58.179 [2024-11-26 23:09:37.143227] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072
00:24:58.179 [2024-11-26 23:09:37.143240] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 25792
00:24:58.179 [2024-11-26 23:09:37.143248] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 24832
00:24:58.179 [2024-11-26 23:09:37.143257] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0387
00:24:58.179 [2024-11-26 23:09:37.143265] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:24:58.179 [2024-11-26 23:09:37.143274] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:24:58.179 [2024-11-26 23:09:37.143281] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:24:58.179 [2024-11-26 23:09:37.143289] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:24:58.179 [2024-11-26 23:09:37.143596] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:24:58.179 [2024-11-26 23:09:37.143633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:58.180 [2024-11-26 23:09:37.143656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:24:58.180 [2024-11-26 23:09:37.143690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.267 ms
00:24:58.180 [2024-11-26 23:09:37.143710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:58.180 [2024-11-26 23:09:37.146953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:58.180 [2024-11-26 23:09:37.147111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:24:58.180 [2024-11-26 23:09:37.147129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.206 ms
00:24:58.180 [2024-11-26 23:09:37.147137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:58.180 [2024-11-26 23:09:37.147328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:58.180 [2024-11-26 23:09:37.147346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:24:58.180 [2024-11-26 23:09:37.147357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms
00:24:58.180 [2024-11-26 23:09:37.147371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:58.180 [2024-11-26 23:09:37.157745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:58.180 [2024-11-26 23:09:37.157949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:24:58.180 [2024-11-26 23:09:37.157984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:58.180 [2024-11-26 23:09:37.157992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
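The WAF figure in the statistics dump above is simply the ratio of total media writes to user-initiated writes, so the two counters can be checked directly. A quick sketch (not part of the test run itself):

  #!/usr/bin/env bash
  # WAF (write amplification factor) = total writes / user writes,
  # using the ftl_dev_dump_stats counters from the dump above.
  # 25792 / 24832 is about 1.0387, matching the logged "WAF: 1.0387".
  awk 'BEGIN { printf "WAF: %.4f\n", 25792 / 24832 }'
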
00:24:58.180 [2024-11-26 23:09:37.158057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:58.180 [2024-11-26 23:09:37.158066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:24:58.180 [2024-11-26 23:09:37.158080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:58.180 [2024-11-26 23:09:37.158092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:58.180 [2024-11-26 23:09:37.158161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:58.180 [2024-11-26 23:09:37.158172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:24:58.180 [2024-11-26 23:09:37.158181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:58.180 [2024-11-26 23:09:37.158189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:58.180 [2024-11-26 23:09:37.158206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:58.180 [2024-11-26 23:09:37.158214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:24:58.180 [2024-11-26 23:09:37.158223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:58.180 [2024-11-26 23:09:37.158230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:58.180 [2024-11-26 23:09:37.178171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:58.180 [2024-11-26 23:09:37.178238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:24:58.180 [2024-11-26 23:09:37.178252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:58.181 [2024-11-26 23:09:37.178263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:58.181 [2024-11-26 23:09:37.194096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:58.181 [2024-11-26 23:09:37.194158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:24:58.181 [2024-11-26 23:09:37.194173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:58.181 [2024-11-26 23:09:37.194201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:58.181 [2024-11-26 23:09:37.194267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:58.181 [2024-11-26 23:09:37.194278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:24:58.181 [2024-11-26 23:09:37.194288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:58.181 [2024-11-26 23:09:37.194348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:58.181 [2024-11-26 23:09:37.194389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:58.181 [2024-11-26 23:09:37.194399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:24:58.181 [2024-11-26 23:09:37.194409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:58.181 [2024-11-26 23:09:37.194418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:58.181 [2024-11-26 23:09:37.194518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:58.181 [2024-11-26 23:09:37.194530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:24:58.181 [2024-11-26 23:09:37.194539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:58.181 [2024-11-26 23:09:37.194548] mngt/ftl_mngt.c:
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.181 [2024-11-26 23:09:37.194584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.181 [2024-11-26 23:09:37.194595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:58.181 [2024-11-26 23:09:37.194604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.181 [2024-11-26 23:09:37.194613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.181 [2024-11-26 23:09:37.194680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.181 [2024-11-26 23:09:37.194691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:58.181 [2024-11-26 23:09:37.194706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.181 [2024-11-26 23:09:37.194715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.181 [2024-11-26 23:09:37.194779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.181 [2024-11-26 23:09:37.194791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:58.181 [2024-11-26 23:09:37.194800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.181 [2024-11-26 23:09:37.194810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.181 [2024-11-26 23:09:37.194992] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 375.692 ms, result 0 00:24:58.451 00:24:58.451 00:24:58.451 23:09:37 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:00.988 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:00.988 23:09:39 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:25:00.988 23:09:39 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:25:00.988 23:09:39 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:00.988 23:09:39 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:00.988 23:09:39 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:00.988 Process with pid 90167 is not found 00:25:00.988 Remove shared memory files 00:25:00.988 23:09:39 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 90167 00:25:00.988 23:09:39 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 90167 ']' 00:25:00.988 23:09:39 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 90167 00:25:00.988 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90167) - No such process 00:25:00.988 23:09:39 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 90167 is not found' 00:25:00.988 23:09:39 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:25:00.988 23:09:39 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:00.988 23:09:39 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:25:00.988 23:09:39 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:25:00.988 23:09:39 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:25:00.988 23:09:39 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:00.988 23:09:39 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:25:00.988 00:25:00.988 real 4m21.737s 00:25:00.988 user 4m8.898s 
00:25:00.988 sys 0m12.922s 00:25:00.988 23:09:39 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:00.988 23:09:39 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:25:00.988 ************************************ 00:25:00.988 END TEST ftl_restore 00:25:00.988 ************************************ 00:25:00.988 23:09:39 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:00.988 23:09:39 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:00.988 23:09:39 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:00.988 23:09:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:00.988 ************************************ 00:25:00.988 START TEST ftl_dirty_shutdown 00:25:00.988 ************************************ 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:00.988 * Looking for test storage... 00:25:00.988 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:00.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.988 --rc genhtml_branch_coverage=1 00:25:00.988 --rc genhtml_function_coverage=1 00:25:00.988 --rc genhtml_legend=1 00:25:00.988 --rc geninfo_all_blocks=1 00:25:00.988 --rc geninfo_unexecuted_blocks=1 00:25:00.988 00:25:00.988 ' 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:00.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.988 --rc genhtml_branch_coverage=1 00:25:00.988 --rc genhtml_function_coverage=1 00:25:00.988 --rc genhtml_legend=1 00:25:00.988 --rc geninfo_all_blocks=1 00:25:00.988 --rc geninfo_unexecuted_blocks=1 00:25:00.988 00:25:00.988 ' 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:00.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.988 --rc genhtml_branch_coverage=1 00:25:00.988 --rc genhtml_function_coverage=1 00:25:00.988 --rc genhtml_legend=1 00:25:00.988 --rc geninfo_all_blocks=1 00:25:00.988 --rc geninfo_unexecuted_blocks=1 00:25:00.988 00:25:00.988 ' 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:00.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.988 --rc genhtml_branch_coverage=1 00:25:00.988 --rc genhtml_function_coverage=1 00:25:00.988 --rc genhtml_legend=1 00:25:00.988 --rc geninfo_all_blocks=1 00:25:00.988 --rc geninfo_unexecuted_blocks=1 00:25:00.988 00:25:00.988 ' 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:00.988 23:09:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:25:00.989 23:09:39 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=92923 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 92923 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 92923 ']' 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:00.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:00.989 23:09:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:00.989 [2024-11-26 23:09:40.025887] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:25:00.989 [2024-11-26 23:09:40.026651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92923 ] 00:25:01.249 [2024-11-26 23:09:40.166843] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
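The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from the harness's waitforlisten helper: spdk_tgt is launched in the background and its RPC socket is polled until it answers. A minimal sketch of that pattern, assuming the default socket path (the real helper in autotest_common.sh is more defensive, with retries and timeouts):

  #!/usr/bin/env bash
  # Simplified launch-and-wait pattern; not the autotest_common.sh implementation.
  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  "$spdk_tgt" -m 0x1 &          # 0x1 = single-core reactor mask, as in this run
  svcpid=$!

  # Poll the default RPC socket (/var/tmp/spdk.sock) until the target responds.
  until "$rpc_py" rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$svcpid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done
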
00:25:01.250 [2024-11-26 23:09:40.193443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.250 [2024-11-26 23:09:40.234831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.826 23:09:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:01.826 23:09:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:25:01.826 23:09:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:01.826 23:09:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:25:01.826 23:09:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:01.826 23:09:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:25:01.826 23:09:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:01.826 23:09:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:02.087 23:09:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:02.087 23:09:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:02.087 23:09:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:02.087 23:09:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:02.087 23:09:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:02.087 23:09:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:02.087 23:09:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:02.087 23:09:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:02.348 23:09:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:02.348 { 00:25:02.348 "name": "nvme0n1", 00:25:02.348 "aliases": [ 00:25:02.348 "d6a2a2e7-9aa9-4328-a902-8667f5e9924a" 00:25:02.348 ], 00:25:02.348 "product_name": "NVMe disk", 00:25:02.348 "block_size": 4096, 00:25:02.348 "num_blocks": 1310720, 00:25:02.348 "uuid": "d6a2a2e7-9aa9-4328-a902-8667f5e9924a", 00:25:02.348 "numa_id": -1, 00:25:02.348 "assigned_rate_limits": { 00:25:02.348 "rw_ios_per_sec": 0, 00:25:02.348 "rw_mbytes_per_sec": 0, 00:25:02.348 "r_mbytes_per_sec": 0, 00:25:02.348 "w_mbytes_per_sec": 0 00:25:02.348 }, 00:25:02.348 "claimed": true, 00:25:02.348 "claim_type": "read_many_write_one", 00:25:02.348 "zoned": false, 00:25:02.348 "supported_io_types": { 00:25:02.348 "read": true, 00:25:02.348 "write": true, 00:25:02.348 "unmap": true, 00:25:02.348 "flush": true, 00:25:02.348 "reset": true, 00:25:02.348 "nvme_admin": true, 00:25:02.348 "nvme_io": true, 00:25:02.348 "nvme_io_md": false, 00:25:02.348 "write_zeroes": true, 00:25:02.348 "zcopy": false, 00:25:02.348 "get_zone_info": false, 00:25:02.348 "zone_management": false, 00:25:02.348 "zone_append": false, 00:25:02.348 "compare": true, 00:25:02.348 "compare_and_write": false, 00:25:02.348 "abort": true, 00:25:02.348 "seek_hole": false, 00:25:02.348 "seek_data": false, 00:25:02.348 "copy": true, 00:25:02.348 "nvme_iov_md": false 00:25:02.348 }, 00:25:02.348 "driver_specific": { 00:25:02.348 "nvme": [ 00:25:02.348 { 00:25:02.348 "pci_address": "0000:00:11.0", 00:25:02.348 "trid": { 00:25:02.348 "trtype": "PCIe", 00:25:02.348 "traddr": "0000:00:11.0" 00:25:02.348 }, 00:25:02.348 "ctrlr_data": { 
00:25:02.348 "cntlid": 0, 00:25:02.348 "vendor_id": "0x1b36", 00:25:02.348 "model_number": "QEMU NVMe Ctrl", 00:25:02.348 "serial_number": "12341", 00:25:02.348 "firmware_revision": "8.0.0", 00:25:02.348 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:02.348 "oacs": { 00:25:02.348 "security": 0, 00:25:02.349 "format": 1, 00:25:02.349 "firmware": 0, 00:25:02.349 "ns_manage": 1 00:25:02.349 }, 00:25:02.349 "multi_ctrlr": false, 00:25:02.349 "ana_reporting": false 00:25:02.349 }, 00:25:02.349 "vs": { 00:25:02.349 "nvme_version": "1.4" 00:25:02.349 }, 00:25:02.349 "ns_data": { 00:25:02.349 "id": 1, 00:25:02.349 "can_share": false 00:25:02.349 } 00:25:02.349 } 00:25:02.349 ], 00:25:02.349 "mp_policy": "active_passive" 00:25:02.349 } 00:25:02.349 } 00:25:02.349 ]' 00:25:02.349 23:09:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:02.349 23:09:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:02.349 23:09:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:02.349 23:09:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:02.349 23:09:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:02.349 23:09:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:25:02.349 23:09:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:02.349 23:09:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:02.349 23:09:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:02.349 23:09:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:02.349 23:09:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:02.609 23:09:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=93afc4aa-3bf9-46e4-83ab-670e9550127e 00:25:02.609 23:09:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:02.609 23:09:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 93afc4aa-3bf9-46e4-83ab-670e9550127e 00:25:02.870 23:09:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:03.129 23:09:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=e45abb66-8511-4278-9a4a-0ceb4ba2f7a7 00:25:03.129 23:09:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e45abb66-8511-4278-9a4a-0ceb4ba2f7a7 00:25:03.129 23:09:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=cdbe6294-41f9-426f-b4ae-398f55e21f3c 00:25:03.129 23:09:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:25:03.129 23:09:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 cdbe6294-41f9-426f-b4ae-398f55e21f3c 00:25:03.129 23:09:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:25:03.129 23:09:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:03.129 23:09:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=cdbe6294-41f9-426f-b4ae-398f55e21f3c 00:25:03.129 23:09:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:25:03.129 23:09:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 
cdbe6294-41f9-426f-b4ae-398f55e21f3c 00:25:03.129 23:09:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=cdbe6294-41f9-426f-b4ae-398f55e21f3c 00:25:03.129 23:09:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:03.129 23:09:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:03.129 23:09:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:03.129 23:09:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cdbe6294-41f9-426f-b4ae-398f55e21f3c 00:25:03.388 23:09:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:03.388 { 00:25:03.388 "name": "cdbe6294-41f9-426f-b4ae-398f55e21f3c", 00:25:03.388 "aliases": [ 00:25:03.388 "lvs/nvme0n1p0" 00:25:03.388 ], 00:25:03.388 "product_name": "Logical Volume", 00:25:03.388 "block_size": 4096, 00:25:03.388 "num_blocks": 26476544, 00:25:03.388 "uuid": "cdbe6294-41f9-426f-b4ae-398f55e21f3c", 00:25:03.388 "assigned_rate_limits": { 00:25:03.388 "rw_ios_per_sec": 0, 00:25:03.388 "rw_mbytes_per_sec": 0, 00:25:03.388 "r_mbytes_per_sec": 0, 00:25:03.388 "w_mbytes_per_sec": 0 00:25:03.388 }, 00:25:03.388 "claimed": false, 00:25:03.388 "zoned": false, 00:25:03.388 "supported_io_types": { 00:25:03.388 "read": true, 00:25:03.388 "write": true, 00:25:03.388 "unmap": true, 00:25:03.388 "flush": false, 00:25:03.388 "reset": true, 00:25:03.388 "nvme_admin": false, 00:25:03.388 "nvme_io": false, 00:25:03.388 "nvme_io_md": false, 00:25:03.388 "write_zeroes": true, 00:25:03.388 "zcopy": false, 00:25:03.388 "get_zone_info": false, 00:25:03.388 "zone_management": false, 00:25:03.388 "zone_append": false, 00:25:03.388 "compare": false, 00:25:03.388 "compare_and_write": false, 00:25:03.388 "abort": false, 00:25:03.388 "seek_hole": true, 00:25:03.388 "seek_data": true, 00:25:03.388 "copy": false, 00:25:03.388 "nvme_iov_md": false 00:25:03.388 }, 00:25:03.388 "driver_specific": { 00:25:03.388 "lvol": { 00:25:03.388 "lvol_store_uuid": "e45abb66-8511-4278-9a4a-0ceb4ba2f7a7", 00:25:03.388 "base_bdev": "nvme0n1", 00:25:03.388 "thin_provision": true, 00:25:03.388 "num_allocated_clusters": 0, 00:25:03.388 "snapshot": false, 00:25:03.388 "clone": false, 00:25:03.388 "esnap_clone": false 00:25:03.388 } 00:25:03.388 } 00:25:03.388 } 00:25:03.388 ]' 00:25:03.388 23:09:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:03.388 23:09:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:03.388 23:09:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:03.648 23:09:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:03.648 23:09:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:03.648 23:09:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:03.648 23:09:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:25:03.648 23:09:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:03.648 23:09:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:03.648 23:09:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:03.648 23:09:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:03.648 23:09:42 
ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size cdbe6294-41f9-426f-b4ae-398f55e21f3c 00:25:03.916 23:09:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=cdbe6294-41f9-426f-b4ae-398f55e21f3c 00:25:03.916 23:09:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:03.917 23:09:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:03.917 23:09:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:03.917 23:09:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cdbe6294-41f9-426f-b4ae-398f55e21f3c 00:25:03.917 23:09:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:03.917 { 00:25:03.917 "name": "cdbe6294-41f9-426f-b4ae-398f55e21f3c", 00:25:03.917 "aliases": [ 00:25:03.917 "lvs/nvme0n1p0" 00:25:03.917 ], 00:25:03.917 "product_name": "Logical Volume", 00:25:03.917 "block_size": 4096, 00:25:03.917 "num_blocks": 26476544, 00:25:03.917 "uuid": "cdbe6294-41f9-426f-b4ae-398f55e21f3c", 00:25:03.917 "assigned_rate_limits": { 00:25:03.917 "rw_ios_per_sec": 0, 00:25:03.917 "rw_mbytes_per_sec": 0, 00:25:03.917 "r_mbytes_per_sec": 0, 00:25:03.917 "w_mbytes_per_sec": 0 00:25:03.917 }, 00:25:03.917 "claimed": false, 00:25:03.917 "zoned": false, 00:25:03.917 "supported_io_types": { 00:25:03.917 "read": true, 00:25:03.917 "write": true, 00:25:03.917 "unmap": true, 00:25:03.917 "flush": false, 00:25:03.917 "reset": true, 00:25:03.917 "nvme_admin": false, 00:25:03.917 "nvme_io": false, 00:25:03.917 "nvme_io_md": false, 00:25:03.917 "write_zeroes": true, 00:25:03.917 "zcopy": false, 00:25:03.917 "get_zone_info": false, 00:25:03.917 "zone_management": false, 00:25:03.917 "zone_append": false, 00:25:03.917 "compare": false, 00:25:03.917 "compare_and_write": false, 00:25:03.917 "abort": false, 00:25:03.917 "seek_hole": true, 00:25:03.917 "seek_data": true, 00:25:03.917 "copy": false, 00:25:03.917 "nvme_iov_md": false 00:25:03.917 }, 00:25:03.917 "driver_specific": { 00:25:03.917 "lvol": { 00:25:03.917 "lvol_store_uuid": "e45abb66-8511-4278-9a4a-0ceb4ba2f7a7", 00:25:03.917 "base_bdev": "nvme0n1", 00:25:03.917 "thin_provision": true, 00:25:03.917 "num_allocated_clusters": 0, 00:25:03.917 "snapshot": false, 00:25:03.917 "clone": false, 00:25:03.917 "esnap_clone": false 00:25:03.917 } 00:25:03.917 } 00:25:03.917 } 00:25:03.917 ]' 00:25:03.917 23:09:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:03.917 23:09:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:03.917 23:09:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:04.176 23:09:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:04.176 23:09:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:04.176 23:09:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:04.176 23:09:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:25:04.176 23:09:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:04.176 23:09:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:25:04.176 23:09:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size cdbe6294-41f9-426f-b4ae-398f55e21f3c 00:25:04.176 
23:09:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=cdbe6294-41f9-426f-b4ae-398f55e21f3c 00:25:04.176 23:09:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:04.176 23:09:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:04.176 23:09:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:04.176 23:09:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cdbe6294-41f9-426f-b4ae-398f55e21f3c 00:25:04.436 23:09:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:04.436 { 00:25:04.436 "name": "cdbe6294-41f9-426f-b4ae-398f55e21f3c", 00:25:04.436 "aliases": [ 00:25:04.436 "lvs/nvme0n1p0" 00:25:04.436 ], 00:25:04.436 "product_name": "Logical Volume", 00:25:04.436 "block_size": 4096, 00:25:04.436 "num_blocks": 26476544, 00:25:04.436 "uuid": "cdbe6294-41f9-426f-b4ae-398f55e21f3c", 00:25:04.436 "assigned_rate_limits": { 00:25:04.436 "rw_ios_per_sec": 0, 00:25:04.436 "rw_mbytes_per_sec": 0, 00:25:04.436 "r_mbytes_per_sec": 0, 00:25:04.436 "w_mbytes_per_sec": 0 00:25:04.436 }, 00:25:04.436 "claimed": false, 00:25:04.436 "zoned": false, 00:25:04.436 "supported_io_types": { 00:25:04.436 "read": true, 00:25:04.436 "write": true, 00:25:04.436 "unmap": true, 00:25:04.436 "flush": false, 00:25:04.436 "reset": true, 00:25:04.436 "nvme_admin": false, 00:25:04.436 "nvme_io": false, 00:25:04.436 "nvme_io_md": false, 00:25:04.436 "write_zeroes": true, 00:25:04.436 "zcopy": false, 00:25:04.436 "get_zone_info": false, 00:25:04.436 "zone_management": false, 00:25:04.436 "zone_append": false, 00:25:04.436 "compare": false, 00:25:04.436 "compare_and_write": false, 00:25:04.436 "abort": false, 00:25:04.436 "seek_hole": true, 00:25:04.436 "seek_data": true, 00:25:04.436 "copy": false, 00:25:04.436 "nvme_iov_md": false 00:25:04.436 }, 00:25:04.436 "driver_specific": { 00:25:04.436 "lvol": { 00:25:04.436 "lvol_store_uuid": "e45abb66-8511-4278-9a4a-0ceb4ba2f7a7", 00:25:04.436 "base_bdev": "nvme0n1", 00:25:04.436 "thin_provision": true, 00:25:04.436 "num_allocated_clusters": 0, 00:25:04.436 "snapshot": false, 00:25:04.436 "clone": false, 00:25:04.436 "esnap_clone": false 00:25:04.436 } 00:25:04.436 } 00:25:04.436 } 00:25:04.436 ]' 00:25:04.436 23:09:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:04.436 23:09:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:04.436 23:09:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:04.436 23:09:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:04.436 23:09:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:04.436 23:09:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:04.436 23:09:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:25:04.436 23:09:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d cdbe6294-41f9-426f-b4ae-398f55e21f3c --l2p_dram_limit 10' 00:25:04.436 23:09:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:25:04.436 23:09:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:25:04.436 23:09:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 
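The bs, nb, and bdev_size values traced above show get_bdev_size at work: block_size and num_blocks are pulled out of the bdev_get_bdevs JSON with jq, multiplied, and converted to MiB. A condensed sketch of the same computation, using the values from this run:

  #!/usr/bin/env bash
  # Condensed form of the get_bdev_size arithmetic traced above.
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bdev=cdbe6294-41f9-426f-b4ae-398f55e21f3c

  info=$("$rpc_py" bdev_get_bdevs -b "$bdev")
  bs=$(jq -r '.[] .block_size' <<< "$info")    # 4096 in this run
  nb=$(jq -r '.[] .num_blocks' <<< "$info")    # 26476544 in this run

  # 26476544 blocks * 4096 B / 2^20 = 103424 MiB, the bdev_size echoed above.
  echo $(( nb * bs / 1024 / 1024 ))
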
00:25:04.436 23:09:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d cdbe6294-41f9-426f-b4ae-398f55e21f3c --l2p_dram_limit 10 -c nvc0n1p0 00:25:04.696 [2024-11-26 23:09:43.729146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.696 [2024-11-26 23:09:43.729188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:04.696 [2024-11-26 23:09:43.729206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:04.696 [2024-11-26 23:09:43.729213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.696 [2024-11-26 23:09:43.729265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.696 [2024-11-26 23:09:43.729275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:04.696 [2024-11-26 23:09:43.729286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:25:04.696 [2024-11-26 23:09:43.729292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.696 [2024-11-26 23:09:43.729324] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:04.696 [2024-11-26 23:09:43.729832] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:04.696 [2024-11-26 23:09:43.729864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.696 [2024-11-26 23:09:43.729872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:04.696 [2024-11-26 23:09:43.729884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:25:04.696 [2024-11-26 23:09:43.729890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.696 [2024-11-26 23:09:43.729958] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 16fdd1ae-09cf-4996-8826-278dc9b2a035 00:25:04.696 [2024-11-26 23:09:43.731271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.696 [2024-11-26 23:09:43.731307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:04.696 [2024-11-26 23:09:43.731317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:04.696 [2024-11-26 23:09:43.731325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.696 [2024-11-26 23:09:43.738321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.696 [2024-11-26 23:09:43.738344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:04.696 [2024-11-26 23:09:43.738352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.962 ms 00:25:04.696 [2024-11-26 23:09:43.738364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.696 [2024-11-26 23:09:43.738438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.696 [2024-11-26 23:09:43.738448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:04.696 [2024-11-26 23:09:43.738455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:25:04.696 [2024-11-26 23:09:43.738466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.696 [2024-11-26 23:09:43.738511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.696 [2024-11-26 23:09:43.738521] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:04.696 [2024-11-26 23:09:43.738527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:04.696 [2024-11-26 23:09:43.738535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.696 [2024-11-26 23:09:43.738552] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:04.696 [2024-11-26 23:09:43.740188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.696 [2024-11-26 23:09:43.740210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:04.696 [2024-11-26 23:09:43.740220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.639 ms 00:25:04.696 [2024-11-26 23:09:43.740227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.696 [2024-11-26 23:09:43.740255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.696 [2024-11-26 23:09:43.740263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:04.696 [2024-11-26 23:09:43.740273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:04.696 [2024-11-26 23:09:43.740280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.696 [2024-11-26 23:09:43.740306] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:04.696 [2024-11-26 23:09:43.740420] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:04.696 [2024-11-26 23:09:43.740431] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:04.696 [2024-11-26 23:09:43.740441] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:04.696 [2024-11-26 23:09:43.740459] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:04.696 [2024-11-26 23:09:43.740466] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:04.696 [2024-11-26 23:09:43.740479] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:04.696 [2024-11-26 23:09:43.740485] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:04.696 [2024-11-26 23:09:43.740492] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:04.696 [2024-11-26 23:09:43.740497] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:04.696 [2024-11-26 23:09:43.740504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.696 [2024-11-26 23:09:43.740510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:04.696 [2024-11-26 23:09:43.740517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:25:04.696 [2024-11-26 23:09:43.740523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.696 [2024-11-26 23:09:43.740588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.696 [2024-11-26 23:09:43.740594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:04.696 [2024-11-26 23:09:43.740603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:25:04.696 [2024-11-26 23:09:43.740610] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.696 [2024-11-26 23:09:43.740692] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:04.696 [2024-11-26 23:09:43.740698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:04.696 [2024-11-26 23:09:43.740706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:04.696 [2024-11-26 23:09:43.740711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:04.696 [2024-11-26 23:09:43.740718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:04.696 [2024-11-26 23:09:43.740723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:04.696 [2024-11-26 23:09:43.740729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:04.696 [2024-11-26 23:09:43.740735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:04.696 [2024-11-26 23:09:43.740742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:04.697 [2024-11-26 23:09:43.740746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:04.697 [2024-11-26 23:09:43.740754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:04.697 [2024-11-26 23:09:43.740759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:04.697 [2024-11-26 23:09:43.740767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:04.697 [2024-11-26 23:09:43.740772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:04.697 [2024-11-26 23:09:43.740778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:04.697 [2024-11-26 23:09:43.740783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:04.697 [2024-11-26 23:09:43.740789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:04.697 [2024-11-26 23:09:43.740794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:04.697 [2024-11-26 23:09:43.740800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:04.697 [2024-11-26 23:09:43.740805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:04.697 [2024-11-26 23:09:43.740812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:04.697 [2024-11-26 23:09:43.740816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:04.697 [2024-11-26 23:09:43.740823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:04.697 [2024-11-26 23:09:43.740828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:04.697 [2024-11-26 23:09:43.740834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:04.697 [2024-11-26 23:09:43.740839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:04.697 [2024-11-26 23:09:43.740846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:04.697 [2024-11-26 23:09:43.740850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:04.697 [2024-11-26 23:09:43.740859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:04.697 [2024-11-26 23:09:43.740864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:04.697 [2024-11-26 23:09:43.740870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:04.697 [2024-11-26 
23:09:43.740875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:04.697 [2024-11-26 23:09:43.740881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:04.697 [2024-11-26 23:09:43.740885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:04.697 [2024-11-26 23:09:43.740891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:04.697 [2024-11-26 23:09:43.740896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:04.697 [2024-11-26 23:09:43.740903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:04.697 [2024-11-26 23:09:43.740908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:04.697 [2024-11-26 23:09:43.740914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:04.697 [2024-11-26 23:09:43.740922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:04.697 [2024-11-26 23:09:43.740928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:04.697 [2024-11-26 23:09:43.740933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:04.697 [2024-11-26 23:09:43.740940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:04.697 [2024-11-26 23:09:43.740945] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:04.697 [2024-11-26 23:09:43.740954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:04.697 [2024-11-26 23:09:43.740959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:04.697 [2024-11-26 23:09:43.740966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:04.697 [2024-11-26 23:09:43.740973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:04.697 [2024-11-26 23:09:43.740980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:04.697 [2024-11-26 23:09:43.740985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:04.697 [2024-11-26 23:09:43.740991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:04.697 [2024-11-26 23:09:43.740996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:04.697 [2024-11-26 23:09:43.741002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:04.697 [2024-11-26 23:09:43.741010] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:04.697 [2024-11-26 23:09:43.741018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:04.697 [2024-11-26 23:09:43.741025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:04.697 [2024-11-26 23:09:43.741031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:04.697 [2024-11-26 23:09:43.741036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:04.697 [2024-11-26 23:09:43.741043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:04.697 [2024-11-26 23:09:43.741048] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:04.697 [2024-11-26 23:09:43.741058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:04.697 [2024-11-26 23:09:43.741063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:04.697 [2024-11-26 23:09:43.741070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:04.697 [2024-11-26 23:09:43.741075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:04.697 [2024-11-26 23:09:43.741082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:04.697 [2024-11-26 23:09:43.741087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:04.697 [2024-11-26 23:09:43.741093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:04.697 [2024-11-26 23:09:43.741098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:04.697 [2024-11-26 23:09:43.741105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:04.697 [2024-11-26 23:09:43.741110] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:04.697 [2024-11-26 23:09:43.741117] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:04.697 [2024-11-26 23:09:43.741124] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:04.697 [2024-11-26 23:09:43.741131] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:04.697 [2024-11-26 23:09:43.741136] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:04.697 [2024-11-26 23:09:43.741143] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:04.697 [2024-11-26 23:09:43.741149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.697 [2024-11-26 23:09:43.741157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:04.697 [2024-11-26 23:09:43.741163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:25:04.697 [2024-11-26 23:09:43.741170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.697 [2024-11-26 23:09:43.741198] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
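The superblock dump above encodes each region as type / version / block offset / block size in hex, and it describes the same geometry as the MiB layout dump before it: the MiB figures imply a 4 KiB FTL block, so 256 blocks make one MiB. As a cross-check, the type:0x2 entry (blk_offs:0x20, blk_sz:0x5000) decodes to offset 32/256 = 0.12 MiB and size 20480/256 = 80.00 MiB, matching the l2p region in the NV cache layout. A minimal sketch that decodes every entry, assuming this output has been saved to ftl.log (a hypothetical path) and that gawk is available for strtonum:

    grep -o 'type:0x[0-9a-fA-F]* ver:[0-9]* blk_offs:0x[0-9a-fA-F]* blk_sz:0x[0-9a-fA-F]*' ftl.log |
      awk '{
        off = strtonum(substr($3, 10))   # strip the "blk_offs:" prefix, parse the hex value
        sz  = strtonum(substr($4, 8))    # strip the "blk_sz:" prefix, parse the hex value
        printf "%-16s %-6s offset %10.2f MiB  size %9.2f MiB\n", $1, $2, off / 256, sz / 256
      }'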
00:25:04.697 [2024-11-26 23:09:43.741208] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:09.014 [2024-11-26 23:09:47.185085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.014 [2024-11-26 23:09:47.185197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:09.014 [2024-11-26 23:09:47.185226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3443.869 ms 00:25:09.014 [2024-11-26 23:09:47.185240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.014 [2024-11-26 23:09:47.205850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.014 [2024-11-26 23:09:47.205918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:09.014 [2024-11-26 23:09:47.205934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.451 ms 00:25:09.014 [2024-11-26 23:09:47.205956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.014 [2024-11-26 23:09:47.206122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.014 [2024-11-26 23:09:47.206136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:09.014 [2024-11-26 23:09:47.206146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:25:09.014 [2024-11-26 23:09:47.206158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.014 [2024-11-26 23:09:47.223919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.014 [2024-11-26 23:09:47.223984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:09.014 [2024-11-26 23:09:47.224002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.721 ms 00:25:09.014 [2024-11-26 23:09:47.224084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.014 [2024-11-26 23:09:47.224131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.014 [2024-11-26 23:09:47.224152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:09.014 [2024-11-26 23:09:47.224163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:09.014 [2024-11-26 23:09:47.224175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.014 [2024-11-26 23:09:47.224931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.014 [2024-11-26 23:09:47.224974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:09.015 [2024-11-26 23:09:47.224986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.698 ms 00:25:09.015 [2024-11-26 23:09:47.225004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.015 [2024-11-26 23:09:47.225132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.015 [2024-11-26 23:09:47.225145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:09.015 [2024-11-26 23:09:47.225154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:25:09.015 [2024-11-26 23:09:47.225173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.015 [2024-11-26 23:09:47.237077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.015 [2024-11-26 23:09:47.237131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:09.015 [2024-11-26 
23:09:47.237144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.849 ms 00:25:09.015 [2024-11-26 23:09:47.237156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.015 [2024-11-26 23:09:47.259290] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:09.015 [2024-11-26 23:09:47.264475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.015 [2024-11-26 23:09:47.264517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:09.015 [2024-11-26 23:09:47.264534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.204 ms 00:25:09.015 [2024-11-26 23:09:47.264543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.015 [2024-11-26 23:09:47.357328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.015 [2024-11-26 23:09:47.357393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:09.015 [2024-11-26 23:09:47.357417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.696 ms 00:25:09.015 [2024-11-26 23:09:47.357427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.015 [2024-11-26 23:09:47.357645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.015 [2024-11-26 23:09:47.357657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:09.015 [2024-11-26 23:09:47.357669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:25:09.015 [2024-11-26 23:09:47.357678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.015 [2024-11-26 23:09:47.364706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.015 [2024-11-26 23:09:47.364760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:09.015 [2024-11-26 23:09:47.364776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.976 ms 00:25:09.015 [2024-11-26 23:09:47.364786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.015 [2024-11-26 23:09:47.370664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.015 [2024-11-26 23:09:47.370712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:09.015 [2024-11-26 23:09:47.370726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.841 ms 00:25:09.015 [2024-11-26 23:09:47.370734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.015 [2024-11-26 23:09:47.371107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.015 [2024-11-26 23:09:47.371117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:09.015 [2024-11-26 23:09:47.371132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:25:09.015 [2024-11-26 23:09:47.371237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.015 [2024-11-26 23:09:47.419249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.015 [2024-11-26 23:09:47.419325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:09.015 [2024-11-26 23:09:47.419341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.963 ms 00:25:09.015 [2024-11-26 23:09:47.419350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.015 [2024-11-26 23:09:47.428020] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.015 [2024-11-26 23:09:47.428074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:09.015 [2024-11-26 23:09:47.428090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.575 ms 00:25:09.015 [2024-11-26 23:09:47.428099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.015 [2024-11-26 23:09:47.434769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.015 [2024-11-26 23:09:47.434817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:09.015 [2024-11-26 23:09:47.434831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.608 ms 00:25:09.015 [2024-11-26 23:09:47.434839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.015 [2024-11-26 23:09:47.442024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.015 [2024-11-26 23:09:47.442076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:09.015 [2024-11-26 23:09:47.442094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.125 ms 00:25:09.015 [2024-11-26 23:09:47.442103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.015 [2024-11-26 23:09:47.442166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.015 [2024-11-26 23:09:47.442177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:09.015 [2024-11-26 23:09:47.442189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:09.015 [2024-11-26 23:09:47.442206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.015 [2024-11-26 23:09:47.442363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.015 [2024-11-26 23:09:47.442375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:09.015 [2024-11-26 23:09:47.442391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:09.015 [2024-11-26 23:09:47.442400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.015 [2024-11-26 23:09:47.444047] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3714.332 ms, result 0 00:25:09.015 { 00:25:09.015 "name": "ftl0", 00:25:09.015 "uuid": "16fdd1ae-09cf-4996-8826-278dc9b2a035" 00:25:09.015 } 00:25:09.015 23:09:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:25:09.015 23:09:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:09.015 23:09:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:25:09.015 23:09:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:25:09.015 23:09:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:25:09.015 /dev/nbd0 00:25:09.015 23:09:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:25:09.015 23:09:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:09.015 23:09:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:25:09.015 23:09:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:09.015 23:09:47 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:09.015 23:09:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:09.015 23:09:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:25:09.015 23:09:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:09.015 23:09:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:09.015 23:09:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:25:09.015 1+0 records in 00:25:09.015 1+0 records out 00:25:09.015 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000622687 s, 6.6 MB/s 00:25:09.015 23:09:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:09.015 23:09:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:25:09.015 23:09:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:09.015 23:09:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:09.015 23:09:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:25:09.015 23:09:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:25:09.015 [2024-11-26 23:09:48.000980] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:25:09.015 [2024-11-26 23:09:48.001113] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93061 ] 00:25:09.015 [2024-11-26 23:09:48.137093] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:09.277 [2024-11-26 23:09:48.166078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.278 [2024-11-26 23:09:48.194827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:10.225  [2024-11-26T23:09:50.295Z] Copying: 189/1024 [MB] (189 MBps) [2024-11-26T23:09:51.675Z] Copying: 391/1024 [MB] (201 MBps) [2024-11-26T23:09:52.609Z] Copying: 654/1024 [MB] (263 MBps) [2024-11-26T23:09:52.867Z] Copying: 913/1024 [MB] (258 MBps) [2024-11-26T23:09:52.867Z] Copying: 1024/1024 [MB] (average 231 MBps) 00:25:13.740 00:25:13.740 23:09:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:15.643 23:09:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:25:15.902 [2024-11-26 23:09:54.828576] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
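The waitfornbd trace above reduces to a two-phase readiness probe: poll /proc/partitions until the kernel has registered the nbd device, then prove the export actually serves I/O with a one-block O_DIRECT read (the 1+0 records in/out and the 4096-byte stat are that read succeeding on the first try). A condensed sketch of the traced logic; the retry bound of 20 comes from the (( i <= 20 )) loops, while the sleep between attempts and the scratch-file path are assumptions:

    waitfornbd() {
        local nbd_name=$1 i size scratch=/tmp/nbdtest
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed back-off; the trace only shows the loop bounds
        done
        for ((i = 1; i <= 20; i++)); do
            # A direct read of one block proves the device answers I/O,
            # not merely that its node exists.
            if dd if="/dev/$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct 2>/dev/null; then
                size=$(stat -c %s "$scratch")
                rm -f "$scratch"
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1
        done
        return 1
    }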
00:25:15.902 [2024-11-26 23:09:54.828702] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93137 ] 00:25:15.902 [2024-11-26 23:09:54.962436] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:15.902 [2024-11-26 23:09:54.996690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.902 [2024-11-26 23:09:55.016170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:17.301  [2024-11-26T23:09:57.371Z] Copying: 15/1024 [MB] (15 MBps) [2024-11-26T23:09:58.321Z] Copying: 33/1024 [MB] (17 MBps) [2024-11-26T23:09:59.279Z] Copying: 49/1024 [MB] (16 MBps) [2024-11-26T23:10:00.216Z] Copying: 59/1024 [MB] (10 MBps) [2024-11-26T23:10:01.157Z] Copying: 78/1024 [MB] (18 MBps) [2024-11-26T23:10:02.097Z] Copying: 112/1024 [MB] (34 MBps) [2024-11-26T23:10:03.484Z] Copying: 131/1024 [MB] (19 MBps) [2024-11-26T23:10:04.424Z] Copying: 150/1024 [MB] (18 MBps) [2024-11-26T23:10:05.364Z] Copying: 167/1024 [MB] (17 MBps) [2024-11-26T23:10:06.307Z] Copying: 187/1024 [MB] (19 MBps) [2024-11-26T23:10:07.261Z] Copying: 207/1024 [MB] (19 MBps) [2024-11-26T23:10:08.209Z] Copying: 225/1024 [MB] (18 MBps) [2024-11-26T23:10:09.153Z] Copying: 242/1024 [MB] (17 MBps) [2024-11-26T23:10:10.094Z] Copying: 260/1024 [MB] (17 MBps) [2024-11-26T23:10:11.481Z] Copying: 279/1024 [MB] (19 MBps) [2024-11-26T23:10:12.418Z] Copying: 297/1024 [MB] (18 MBps) [2024-11-26T23:10:13.360Z] Copying: 331/1024 [MB] (33 MBps) [2024-11-26T23:10:14.303Z] Copying: 349/1024 [MB] (17 MBps) [2024-11-26T23:10:15.245Z] Copying: 367/1024 [MB] (18 MBps) [2024-11-26T23:10:16.187Z] Copying: 384/1024 [MB] (17 MBps) [2024-11-26T23:10:17.153Z] Copying: 402/1024 [MB] (17 MBps) [2024-11-26T23:10:18.098Z] Copying: 419/1024 [MB] (17 MBps) [2024-11-26T23:10:19.483Z] Copying: 434/1024 [MB] (15 MBps) [2024-11-26T23:10:20.096Z] Copying: 453/1024 [MB] (18 MBps) [2024-11-26T23:10:21.080Z] Copying: 470/1024 [MB] (17 MBps) [2024-11-26T23:10:22.461Z] Copying: 492/1024 [MB] (21 MBps) [2024-11-26T23:10:23.428Z] Copying: 520/1024 [MB] (28 MBps) [2024-11-26T23:10:24.368Z] Copying: 535/1024 [MB] (14 MBps) [2024-11-26T23:10:25.321Z] Copying: 552/1024 [MB] (17 MBps) [2024-11-26T23:10:26.273Z] Copying: 568/1024 [MB] (15 MBps) [2024-11-26T23:10:27.218Z] Copying: 586/1024 [MB] (18 MBps) [2024-11-26T23:10:28.161Z] Copying: 603/1024 [MB] (17 MBps) [2024-11-26T23:10:29.099Z] Copying: 621/1024 [MB] (17 MBps) [2024-11-26T23:10:30.476Z] Copying: 640/1024 [MB] (18 MBps) [2024-11-26T23:10:31.417Z] Copying: 656/1024 [MB] (16 MBps) [2024-11-26T23:10:32.353Z] Copying: 674/1024 [MB] (18 MBps) [2024-11-26T23:10:33.297Z] Copying: 703/1024 [MB] (28 MBps) [2024-11-26T23:10:34.252Z] Copying: 724/1024 [MB] (20 MBps) [2024-11-26T23:10:35.193Z] Copying: 740/1024 [MB] (16 MBps) [2024-11-26T23:10:36.133Z] Copying: 759/1024 [MB] (18 MBps) [2024-11-26T23:10:37.070Z] Copying: 778/1024 [MB] (18 MBps) [2024-11-26T23:10:38.454Z] Copying: 804/1024 [MB] (25 MBps) [2024-11-26T23:10:39.400Z] Copying: 823/1024 [MB] (19 MBps) [2024-11-26T23:10:40.341Z] Copying: 840/1024 [MB] (16 MBps) [2024-11-26T23:10:41.281Z] Copying: 857/1024 [MB] (16 MBps) [2024-11-26T23:10:42.224Z] Copying: 880/1024 [MB] (22 MBps) [2024-11-26T23:10:43.164Z] Copying: 899/1024 [MB] (19 MBps) 
[2024-11-26T23:10:44.104Z] Copying: 917/1024 [MB] (18 MBps) [2024-11-26T23:10:45.488Z] Copying: 933/1024 [MB] (15 MBps) [2024-11-26T23:10:46.429Z] Copying: 951/1024 [MB] (17 MBps) [2024-11-26T23:10:47.368Z] Copying: 967/1024 [MB] (15 MBps) [2024-11-26T23:10:48.310Z] Copying: 984/1024 [MB] (17 MBps) [2024-11-26T23:10:49.253Z] Copying: 1001/1024 [MB] (16 MBps) [2024-11-26T23:10:49.514Z] Copying: 1019/1024 [MB] (18 MBps) [2024-11-26T23:10:49.514Z] Copying: 1024/1024 [MB] (average 18 MBps) 00:26:10.387 00:26:10.649 23:10:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:26:10.649 23:10:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:26:10.649 23:10:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:10.914 [2024-11-26 23:10:49.925820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.914 [2024-11-26 23:10:49.925885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:10.914 [2024-11-26 23:10:49.925902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:10.914 [2024-11-26 23:10:49.925915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.914 [2024-11-26 23:10:49.925946] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:10.914 [2024-11-26 23:10:49.926957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.914 [2024-11-26 23:10:49.926995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:10.914 [2024-11-26 23:10:49.927010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:26:10.914 [2024-11-26 23:10:49.927021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.914 [2024-11-26 23:10:49.930120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.914 [2024-11-26 23:10:49.930168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:10.914 [2024-11-26 23:10:49.930183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.062 ms 00:26:10.914 [2024-11-26 23:10:49.930192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.914 [2024-11-26 23:10:49.948195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.914 [2024-11-26 23:10:49.948228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:10.914 [2024-11-26 23:10:49.948240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.979 ms 00:26:10.914 [2024-11-26 23:10:49.948248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.914 [2024-11-26 23:10:49.954485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.914 [2024-11-26 23:10:49.954511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:10.914 [2024-11-26 23:10:49.954523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.199 ms 00:26:10.914 [2024-11-26 23:10:49.954532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.914 [2024-11-26 23:10:49.956544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.914 [2024-11-26 23:10:49.956572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:10.914 [2024-11-26 23:10:49.956583] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.936 ms 00:26:10.914 [2024-11-26 23:10:49.956591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.914 [2024-11-26 23:10:49.961752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.914 [2024-11-26 23:10:49.961781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:10.914 [2024-11-26 23:10:49.961792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.124 ms 00:26:10.914 [2024-11-26 23:10:49.961800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.914 [2024-11-26 23:10:49.961920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.914 [2024-11-26 23:10:49.961929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:10.914 [2024-11-26 23:10:49.961939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:26:10.914 [2024-11-26 23:10:49.961947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.914 [2024-11-26 23:10:49.964695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.914 [2024-11-26 23:10:49.964724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:10.914 [2024-11-26 23:10:49.964735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.724 ms 00:26:10.914 [2024-11-26 23:10:49.964743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.914 [2024-11-26 23:10:49.967180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.914 [2024-11-26 23:10:49.967208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:10.914 [2024-11-26 23:10:49.967224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.399 ms 00:26:10.914 [2024-11-26 23:10:49.967232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.914 [2024-11-26 23:10:49.969048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.914 [2024-11-26 23:10:49.969075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:10.914 [2024-11-26 23:10:49.969085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.779 ms 00:26:10.914 [2024-11-26 23:10:49.969092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.914 [2024-11-26 23:10:49.970966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.914 [2024-11-26 23:10:49.970993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:10.914 [2024-11-26 23:10:49.971003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.813 ms 00:26:10.914 [2024-11-26 23:10:49.971010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.914 [2024-11-26 23:10:49.971042] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:10.914 [2024-11-26 23:10:49.971057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:10.914 [2024-11-26 23:10:49.971069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:10.914 [2024-11-26 23:10:49.971077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:10.914 [2024-11-26 23:10:49.971089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 
261120 wr_cnt: 0 state: free 00:26:10.914 [2024-11-26 23:10:49.971096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:10.914 [2024-11-26 23:10:49.971105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971541] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 
23:10:49.971754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:10.915 [2024-11-26 23:10:49.971886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:10.916 [2024-11-26 23:10:49.971896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:10.916 [2024-11-26 23:10:49.971904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:10.916 [2024-11-26 23:10:49.971913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:10.916 [2024-11-26 23:10:49.971920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:10.916 [2024-11-26 23:10:49.971931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:10.916 [2024-11-26 23:10:49.971946] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:10.916 [2024-11-26 23:10:49.971955] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 16fdd1ae-09cf-4996-8826-278dc9b2a035 00:26:10.916 [2024-11-26 23:10:49.971963] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:10.916 [2024-11-26 23:10:49.971971] ftl_debug.c: 
214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:10.916 [2024-11-26 23:10:49.971978] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:10.916 [2024-11-26 23:10:49.971988] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:10.916 [2024-11-26 23:10:49.971998] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:10.916 [2024-11-26 23:10:49.972009] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:10.916 [2024-11-26 23:10:49.972016] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:10.916 [2024-11-26 23:10:49.972024] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:10.916 [2024-11-26 23:10:49.972031] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:10.916 [2024-11-26 23:10:49.972039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.916 [2024-11-26 23:10:49.972049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:10.916 [2024-11-26 23:10:49.972059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:26:10.916 [2024-11-26 23:10:49.972066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.916 [2024-11-26 23:10:49.974083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.916 [2024-11-26 23:10:49.974100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:10.916 [2024-11-26 23:10:49.974112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.993 ms 00:26:10.916 [2024-11-26 23:10:49.974120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.916 [2024-11-26 23:10:49.974220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.916 [2024-11-26 23:10:49.974229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:10.916 [2024-11-26 23:10:49.974240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:26:10.916 [2024-11-26 23:10:49.974247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.916 [2024-11-26 23:10:49.981213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:10.916 [2024-11-26 23:10:49.981244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:10.916 [2024-11-26 23:10:49.981256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:10.916 [2024-11-26 23:10:49.981263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.916 [2024-11-26 23:10:49.981333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:10.916 [2024-11-26 23:10:49.981345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:10.916 [2024-11-26 23:10:49.981354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:10.916 [2024-11-26 23:10:49.981362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.916 [2024-11-26 23:10:49.981434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:10.916 [2024-11-26 23:10:49.981444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:10.916 [2024-11-26 23:10:49.981454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:10.916 [2024-11-26 23:10:49.981461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
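The statistics block above is internally consistent: the write-amplification factor is total writes divided by user writes, and with 960 total writes against 0 user writes the quotient is infinite, hence "WAF: inf". A sketch that recomputes it from a saved copy of this output (ftl.log is a hypothetical path, and the usual one-record-per-line console format is assumed):

    awk '/total writes:/ { t = $NF }
         /user writes:/  { u = $NF }
         END { printf "WAF: %s\n", (u > 0 ? t / u : "inf") }' ftl.log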
00:26:10.916 [2024-11-26 23:10:49.981480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:10.916 [2024-11-26 23:10:49.981490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:10.916 [2024-11-26 23:10:49.981499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:10.916 [2024-11-26 23:10:49.981507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.916 [2024-11-26 23:10:49.994373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:10.916 [2024-11-26 23:10:49.994411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:10.916 [2024-11-26 23:10:49.994424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:10.916 [2024-11-26 23:10:49.994432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.916 [2024-11-26 23:10:50.004838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:10.916 [2024-11-26 23:10:50.004880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:10.916 [2024-11-26 23:10:50.004894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:10.916 [2024-11-26 23:10:50.004902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.916 [2024-11-26 23:10:50.004983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:10.916 [2024-11-26 23:10:50.004993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:10.916 [2024-11-26 23:10:50.005003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:10.916 [2024-11-26 23:10:50.005011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.916 [2024-11-26 23:10:50.005060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:10.916 [2024-11-26 23:10:50.005070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:10.916 [2024-11-26 23:10:50.005083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:10.916 [2024-11-26 23:10:50.005091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.916 [2024-11-26 23:10:50.005167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:10.916 [2024-11-26 23:10:50.005176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:10.916 [2024-11-26 23:10:50.005186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:10.916 [2024-11-26 23:10:50.005198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.916 [2024-11-26 23:10:50.005232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:10.916 [2024-11-26 23:10:50.005242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:10.916 [2024-11-26 23:10:50.005251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:10.916 [2024-11-26 23:10:50.005261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.916 [2024-11-26 23:10:50.005319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:10.916 [2024-11-26 23:10:50.005329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:10.916 [2024-11-26 23:10:50.005340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:10.916 [2024-11-26 
23:10:50.005348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.916 [2024-11-26 23:10:50.005396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:10.916 [2024-11-26 23:10:50.005406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:10.916 [2024-11-26 23:10:50.005419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:10.916 [2024-11-26 23:10:50.005427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.916 [2024-11-26 23:10:50.005576] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 79.717 ms, result 0 00:26:10.916 true 00:26:10.916 23:10:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 92923 00:26:10.916 23:10:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid92923 00:26:10.916 23:10:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:26:11.182 [2024-11-26 23:10:50.098733] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:26:11.182 [2024-11-26 23:10:50.098852] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93712 ] 00:26:11.183 [2024-11-26 23:10:50.232269] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:11.183 [2024-11-26 23:10:50.261396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.183 [2024-11-26 23:10:50.290202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.594  [2024-11-26T23:10:52.679Z] Copying: 186/1024 [MB] (186 MBps) [2024-11-26T23:10:53.624Z] Copying: 425/1024 [MB] (238 MBps) [2024-11-26T23:10:54.557Z] Copying: 682/1024 [MB] (257 MBps) [2024-11-26T23:10:54.815Z] Copying: 933/1024 [MB] (250 MBps) [2024-11-26T23:10:55.085Z] Copying: 1024/1024 [MB] (average 234 MBps) 00:26:15.958 00:26:15.958 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 92923 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:26:15.958 23:10:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:15.958 [2024-11-26 23:10:55.013241] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:26:15.958 [2024-11-26 23:10:55.013373] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93772 ] 00:26:16.216 [2024-11-26 23:10:55.146276] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
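With ftl0 cleanly unloaded ('FTL shutdown' finished with result 0 above), the test now takes the step it is named for: SIGKILL the target outright, remove its pid trace file, and drive the next write from spdk_dd standalone. The --json flag replays the bdev configuration captured into ftl.json earlier, so no target process is needed, and --seek=262144 appears to place this second 1 GiB payload immediately after the first. Condensed from the trace above, with $spdk_tgt_pid standing in for pid 92923 and paths shortened:

    kill -9 "$spdk_tgt_pid"                           # SIGKILL: the target gets no chance to clean up
    rm -f "/dev/shm/spdk_tgt_trace.pid$spdk_tgt_pid"
    ./build/bin/spdk_dd --if=test/ftl/testfile2 --ob=ftl0 \
        --count=262144 --seek=262144 --json=test/ftl/config/ftl.json

The "SHM: clean 0, shm_clean 0" load and the blobstore recovery notices that follow appear to be this fresh attach finding the state the killed process left behind.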
00:26:16.216 [2024-11-26 23:10:55.172044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.216 [2024-11-26 23:10:55.207384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.216 [2024-11-26 23:10:55.313329] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:16.216 [2024-11-26 23:10:55.313389] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:16.478 [2024-11-26 23:10:55.375631] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:26:16.478 [2024-11-26 23:10:55.375924] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:26:16.478 [2024-11-26 23:10:55.376613] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:26:16.478 [2024-11-26 23:10:55.568983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.478 [2024-11-26 23:10:55.569031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:16.478 [2024-11-26 23:10:55.569045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:16.478 [2024-11-26 23:10:55.569054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.478 [2024-11-26 23:10:55.569110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.478 [2024-11-26 23:10:55.569120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:16.478 [2024-11-26 23:10:55.569128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:26:16.478 [2024-11-26 23:10:55.569135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.478 [2024-11-26 23:10:55.569155] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:16.478 [2024-11-26 23:10:55.569449] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:16.478 [2024-11-26 23:10:55.569480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.478 [2024-11-26 23:10:55.569488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:16.478 [2024-11-26 23:10:55.569497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:26:16.478 [2024-11-26 23:10:55.569504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.478 [2024-11-26 23:10:55.570928] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:16.478 [2024-11-26 23:10:55.574174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.478 [2024-11-26 23:10:55.574211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:16.478 [2024-11-26 23:10:55.574221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.248 ms 00:26:16.478 [2024-11-26 23:10:55.574236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.478 [2024-11-26 23:10:55.574309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.478 [2024-11-26 23:10:55.574322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:16.478 [2024-11-26 23:10:55.574330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:26:16.478 [2024-11-26 23:10:55.574341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.478 [2024-11-26 23:10:55.581161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:26:16.478 [2024-11-26 23:10:55.581192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:16.478 [2024-11-26 23:10:55.581202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.769 ms 00:26:16.478 [2024-11-26 23:10:55.581210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.478 [2024-11-26 23:10:55.581307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.478 [2024-11-26 23:10:55.581317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:16.478 [2024-11-26 23:10:55.581328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:26:16.478 [2024-11-26 23:10:55.581343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.478 [2024-11-26 23:10:55.581386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.478 [2024-11-26 23:10:55.581396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:16.478 [2024-11-26 23:10:55.581404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:16.478 [2024-11-26 23:10:55.581411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.478 [2024-11-26 23:10:55.581441] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:16.478 [2024-11-26 23:10:55.583189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.478 [2024-11-26 23:10:55.583219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:16.478 [2024-11-26 23:10:55.583229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.756 ms 00:26:16.478 [2024-11-26 23:10:55.583238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.478 [2024-11-26 23:10:55.583269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.478 [2024-11-26 23:10:55.583278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:16.478 [2024-11-26 23:10:55.583287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:16.478 [2024-11-26 23:10:55.583312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.478 [2024-11-26 23:10:55.583344] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:16.478 [2024-11-26 23:10:55.583366] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:16.478 [2024-11-26 23:10:55.583408] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:16.478 [2024-11-26 23:10:55.583424] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:16.478 [2024-11-26 23:10:55.583528] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:16.478 [2024-11-26 23:10:55.583539] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:16.478 [2024-11-26 23:10:55.583550] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:16.478 [2024-11-26 23:10:55.583559] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:16.478 [2024-11-26 23:10:55.583568] 
ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:16.478 [2024-11-26 23:10:55.583576] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:16.478 [2024-11-26 23:10:55.583584] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:16.478 [2024-11-26 23:10:55.583597] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:16.478 [2024-11-26 23:10:55.583605] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:16.478 [2024-11-26 23:10:55.583612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.478 [2024-11-26 23:10:55.583623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:16.478 [2024-11-26 23:10:55.583631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:26:16.478 [2024-11-26 23:10:55.583640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.478 [2024-11-26 23:10:55.583721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.478 [2024-11-26 23:10:55.583734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:16.478 [2024-11-26 23:10:55.583742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:16.478 [2024-11-26 23:10:55.583749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.478 [2024-11-26 23:10:55.583851] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:16.478 [2024-11-26 23:10:55.583865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:16.478 [2024-11-26 23:10:55.583880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:16.478 [2024-11-26 23:10:55.583888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:16.478 [2024-11-26 23:10:55.583901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:16.478 [2024-11-26 23:10:55.583908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:16.478 [2024-11-26 23:10:55.583916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:16.478 [2024-11-26 23:10:55.583924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:16.479 [2024-11-26 23:10:55.583931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:16.479 [2024-11-26 23:10:55.583938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:16.479 [2024-11-26 23:10:55.583950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:16.479 [2024-11-26 23:10:55.583957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:16.479 [2024-11-26 23:10:55.583963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:16.479 [2024-11-26 23:10:55.583970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:16.479 [2024-11-26 23:10:55.583976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:16.479 [2024-11-26 23:10:55.583983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:16.479 [2024-11-26 23:10:55.583989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:16.479 [2024-11-26 23:10:55.583996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:16.479 [2024-11-26 23:10:55.584003] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:16.479 [2024-11-26 23:10:55.584009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:16.479 [2024-11-26 23:10:55.584016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:16.479 [2024-11-26 23:10:55.584022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:16.479 [2024-11-26 23:10:55.584029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:16.479 [2024-11-26 23:10:55.584036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:16.479 [2024-11-26 23:10:55.584042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:16.479 [2024-11-26 23:10:55.584049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:16.479 [2024-11-26 23:10:55.584062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:16.479 [2024-11-26 23:10:55.584069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:16.479 [2024-11-26 23:10:55.584075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:16.479 [2024-11-26 23:10:55.584082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:16.479 [2024-11-26 23:10:55.584088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:16.479 [2024-11-26 23:10:55.584095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:16.479 [2024-11-26 23:10:55.584102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:16.479 [2024-11-26 23:10:55.584110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:16.479 [2024-11-26 23:10:55.584118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:16.479 [2024-11-26 23:10:55.584124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:16.479 [2024-11-26 23:10:55.584130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:16.479 [2024-11-26 23:10:55.584137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:16.479 [2024-11-26 23:10:55.584143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:16.479 [2024-11-26 23:10:55.584149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:16.479 [2024-11-26 23:10:55.584155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:16.479 [2024-11-26 23:10:55.584162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:16.479 [2024-11-26 23:10:55.584170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:16.479 [2024-11-26 23:10:55.584176] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:16.479 [2024-11-26 23:10:55.584183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:16.479 [2024-11-26 23:10:55.584190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:16.479 [2024-11-26 23:10:55.584197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:16.479 [2024-11-26 23:10:55.584204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:16.479 [2024-11-26 23:10:55.584211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:16.479 [2024-11-26 23:10:55.584217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:16.479 
[2024-11-26 23:10:55.584224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:16.479 [2024-11-26 23:10:55.584230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:16.479 [2024-11-26 23:10:55.584237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:16.479 [2024-11-26 23:10:55.584245] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:16.479 [2024-11-26 23:10:55.584257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:16.479 [2024-11-26 23:10:55.584264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:16.479 [2024-11-26 23:10:55.584272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:16.479 [2024-11-26 23:10:55.584280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:16.479 [2024-11-26 23:10:55.584289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:16.479 [2024-11-26 23:10:55.584309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:16.479 [2024-11-26 23:10:55.584315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:16.479 [2024-11-26 23:10:55.584322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:16.479 [2024-11-26 23:10:55.584328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:16.479 [2024-11-26 23:10:55.584335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:16.479 [2024-11-26 23:10:55.584342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:16.479 [2024-11-26 23:10:55.584350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:16.479 [2024-11-26 23:10:55.584357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:16.479 [2024-11-26 23:10:55.584364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:16.479 [2024-11-26 23:10:55.584372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:16.479 [2024-11-26 23:10:55.584380] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:16.479 [2024-11-26 23:10:55.584388] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:16.479 [2024-11-26 23:10:55.584395] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:16.479 [2024-11-26 23:10:55.584402] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:16.479 [2024-11-26 23:10:55.584410] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:16.479 [2024-11-26 23:10:55.584419] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:16.479 [2024-11-26 23:10:55.584427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.479 [2024-11-26 23:10:55.584435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:16.479 [2024-11-26 23:10:55.584442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.645 ms 00:26:16.479 [2024-11-26 23:10:55.584453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.479 [2024-11-26 23:10:55.596455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.479 [2024-11-26 23:10:55.596490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:16.479 [2024-11-26 23:10:55.596501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.947 ms 00:26:16.479 [2024-11-26 23:10:55.596512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.479 [2024-11-26 23:10:55.596593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.479 [2024-11-26 23:10:55.596607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:16.479 [2024-11-26 23:10:55.596615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:26:16.479 [2024-11-26 23:10:55.596624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.739 [2024-11-26 23:10:55.624426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.739 [2024-11-26 23:10:55.624529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:16.739 [2024-11-26 23:10:55.624580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.748 ms 00:26:16.739 [2024-11-26 23:10:55.624608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.739 [2024-11-26 23:10:55.624728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.739 [2024-11-26 23:10:55.624761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:16.739 [2024-11-26 23:10:55.624789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:16.739 [2024-11-26 23:10:55.624823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.739 [2024-11-26 23:10:55.625600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.739 [2024-11-26 23:10:55.625722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:16.739 [2024-11-26 23:10:55.625752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:26:16.739 [2024-11-26 23:10:55.625786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.739 [2024-11-26 23:10:55.626142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.739 [2024-11-26 23:10:55.626189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:16.739 [2024-11-26 23:10:55.626213] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:26:16.739 [2024-11-26 23:10:55.626235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.739 [2024-11-26 23:10:55.633144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.739 [2024-11-26 23:10:55.633177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:16.739 [2024-11-26 23:10:55.633187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.861 ms 00:26:16.739 [2024-11-26 23:10:55.633195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.739 [2024-11-26 23:10:55.636489] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:16.739 [2024-11-26 23:10:55.636524] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:16.739 [2024-11-26 23:10:55.636536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.739 [2024-11-26 23:10:55.636545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:16.739 [2024-11-26 23:10:55.636554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.253 ms 00:26:16.739 [2024-11-26 23:10:55.636562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.739 [2024-11-26 23:10:55.651356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.739 [2024-11-26 23:10:55.651393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:16.739 [2024-11-26 23:10:55.651404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.754 ms 00:26:16.739 [2024-11-26 23:10:55.651412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.739 [2024-11-26 23:10:55.653372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.739 [2024-11-26 23:10:55.653404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:16.739 [2024-11-26 23:10:55.653414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.910 ms 00:26:16.739 [2024-11-26 23:10:55.653421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.739 [2024-11-26 23:10:55.654919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.739 [2024-11-26 23:10:55.654950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:16.739 [2024-11-26 23:10:55.654959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.463 ms 00:26:16.739 [2024-11-26 23:10:55.654971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.739 [2024-11-26 23:10:55.655323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.739 [2024-11-26 23:10:55.655341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:16.739 [2024-11-26 23:10:55.655355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:26:16.739 [2024-11-26 23:10:55.655362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.739 [2024-11-26 23:10:55.674551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.739 [2024-11-26 23:10:55.674596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:16.739 [2024-11-26 23:10:55.674610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
19.171 ms 00:26:16.739 [2024-11-26 23:10:55.674618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.739 [2024-11-26 23:10:55.682413] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:16.739 [2024-11-26 23:10:55.685764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.739 [2024-11-26 23:10:55.685797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:16.739 [2024-11-26 23:10:55.685817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.085 ms 00:26:16.739 [2024-11-26 23:10:55.685826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.739 [2024-11-26 23:10:55.685900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.739 [2024-11-26 23:10:55.685914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:16.739 [2024-11-26 23:10:55.685923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:16.739 [2024-11-26 23:10:55.685931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.739 [2024-11-26 23:10:55.686013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.739 [2024-11-26 23:10:55.686024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:16.739 [2024-11-26 23:10:55.686033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:26:16.739 [2024-11-26 23:10:55.686040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.739 [2024-11-26 23:10:55.686062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.739 [2024-11-26 23:10:55.686071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:16.739 [2024-11-26 23:10:55.686083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:16.739 [2024-11-26 23:10:55.686090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.739 [2024-11-26 23:10:55.686128] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:16.739 [2024-11-26 23:10:55.686139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.739 [2024-11-26 23:10:55.686148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:16.739 [2024-11-26 23:10:55.686156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:16.739 [2024-11-26 23:10:55.686163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.739 [2024-11-26 23:10:55.691011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.739 [2024-11-26 23:10:55.691045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:16.739 [2024-11-26 23:10:55.691056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.825 ms 00:26:16.739 [2024-11-26 23:10:55.691071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.739 [2024-11-26 23:10:55.691152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.739 [2024-11-26 23:10:55.691163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:16.739 [2024-11-26 23:10:55.691172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:26:16.739 [2024-11-26 23:10:55.691182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.739 
[2024-11-26 23:10:55.692287] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 122.851 ms, result 0 00:26:17.681 [2024-11-26T23:10:57.751Z] Copying: 19/1024 [MB] (19 MBps) [... 48 intermediate progress updates omitted ...] [2024-11-26T23:11:45.933Z] Copying: 1024/1024 [MB] (average 20 MBps)[2024-11-26 23:11:45.662221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.806 [2024-11-26 23:11:45.662303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:06.806 [2024-11-26 23:11:45.662329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:06.806 [2024-11-26
23:11:45.662337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.806 [2024-11-26 23:11:45.664959] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:06.806 [2024-11-26 23:11:45.666058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.806 [2024-11-26 23:11:45.666090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:06.806 [2024-11-26 23:11:45.666099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.064 ms 00:27:06.806 [2024-11-26 23:11:45.666105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.806 [2024-11-26 23:11:45.674319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.806 [2024-11-26 23:11:45.674346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:06.806 [2024-11-26 23:11:45.674354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.791 ms 00:27:06.806 [2024-11-26 23:11:45.674365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.806 [2024-11-26 23:11:45.690772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.806 [2024-11-26 23:11:45.690802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:06.806 [2024-11-26 23:11:45.690811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.393 ms 00:27:06.806 [2024-11-26 23:11:45.690818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.806 [2024-11-26 23:11:45.695694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.806 [2024-11-26 23:11:45.695720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:06.806 [2024-11-26 23:11:45.695728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.855 ms 00:27:06.806 [2024-11-26 23:11:45.695739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.806 [2024-11-26 23:11:45.696840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.806 [2024-11-26 23:11:45.696868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:06.806 [2024-11-26 23:11:45.696875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.073 ms 00:27:06.806 [2024-11-26 23:11:45.696881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.807 [2024-11-26 23:11:45.700321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.807 [2024-11-26 23:11:45.700348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:06.807 [2024-11-26 23:11:45.700357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.417 ms 00:27:06.807 [2024-11-26 23:11:45.700363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.807 [2024-11-26 23:11:45.756711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.807 [2024-11-26 23:11:45.756749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:06.807 [2024-11-26 23:11:45.756757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.321 ms 00:27:06.807 [2024-11-26 23:11:45.756766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.807 [2024-11-26 23:11:45.758274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.807 [2024-11-26 23:11:45.758313] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:06.807 [2024-11-26 23:11:45.758321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.495 ms 00:27:06.807 [2024-11-26 23:11:45.758327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.807 [2024-11-26 23:11:45.759366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.807 [2024-11-26 23:11:45.759392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:06.807 [2024-11-26 23:11:45.759400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.016 ms 00:27:06.807 [2024-11-26 23:11:45.759405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.807 [2024-11-26 23:11:45.760186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.807 [2024-11-26 23:11:45.760215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:06.807 [2024-11-26 23:11:45.760222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.759 ms 00:27:06.807 [2024-11-26 23:11:45.760228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.807 [2024-11-26 23:11:45.760980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.807 [2024-11-26 23:11:45.761008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:06.807 [2024-11-26 23:11:45.761015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.711 ms 00:27:06.807 [2024-11-26 23:11:45.761020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.807 [2024-11-26 23:11:45.761042] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:06.807 [2024-11-26 23:11:45.761058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 112128 / 261120 wr_cnt: 1 state: open 00:27:06.807 [2024-11-26 23:11:45.761067 .. 23:11:45.761679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] [... Band 2 through Band 100 omitted: 99 identical entries, each 0 / 261120 wr_cnt: 0 state: free ...] 00:27:06.808 [2024-11-26 23:11:45.761692] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:06.808 [2024-11-26 23:11:45.761698] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 16fdd1ae-09cf-4996-8826-278dc9b2a035 00:27:06.808 [2024-11-26 23:11:45.761705] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 112128 00:27:06.808 [2024-11-26 23:11:45.761710] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 113088 00:27:06.808 [2024-11-26 23:11:45.761716] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 112128 00:27:06.808 [2024-11-26 23:11:45.761752] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0086 00:27:06.808 [2024-11-26 23:11:45.761758] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:06.808 [2024-11-26 23:11:45.761765] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:06.808 [2024-11-26 23:11:45.761781] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:06.808 [2024-11-26 23:11:45.761787] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:06.808 [2024-11-26 23:11:45.761792] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:06.808 [2024-11-26 23:11:45.761798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.808 [2024-11-26 23:11:45.761805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:06.808 [2024-11-26 23:11:45.761815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.757 ms 00:27:06.808 [2024-11-26 23:11:45.761821] mngt/ftl_mngt.c: 431:trace_step:
*NOTICE*: [FTL][ftl0] status: 0 00:27:06.808 [2024-11-26 23:11:45.763514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.808 [2024-11-26 23:11:45.763536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:06.808 [2024-11-26 23:11:45.763544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.680 ms 00:27:06.808 [2024-11-26 23:11:45.763551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.808 [2024-11-26 23:11:45.763642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.808 [2024-11-26 23:11:45.763649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:06.808 [2024-11-26 23:11:45.763657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:27:06.808 [2024-11-26 23:11:45.763663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.808 [2024-11-26 23:11:45.769269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:06.808 [2024-11-26 23:11:45.769303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:06.808 [2024-11-26 23:11:45.769311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:06.808 [2024-11-26 23:11:45.769324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.808 [2024-11-26 23:11:45.769367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:06.808 [2024-11-26 23:11:45.769374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:06.808 [2024-11-26 23:11:45.769380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:06.808 [2024-11-26 23:11:45.769386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.808 [2024-11-26 23:11:45.769418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:06.808 [2024-11-26 23:11:45.769425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:06.808 [2024-11-26 23:11:45.769431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:06.808 [2024-11-26 23:11:45.769437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.808 [2024-11-26 23:11:45.769451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:06.808 [2024-11-26 23:11:45.769457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:06.808 [2024-11-26 23:11:45.769466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:06.808 [2024-11-26 23:11:45.769472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.808 [2024-11-26 23:11:45.779933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:06.808 [2024-11-26 23:11:45.779971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:06.808 [2024-11-26 23:11:45.779979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:06.808 [2024-11-26 23:11:45.779990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.808 [2024-11-26 23:11:45.788409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:06.808 [2024-11-26 23:11:45.788448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:06.808 [2024-11-26 23:11:45.788457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:27:06.808 [2024-11-26 23:11:45.788463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.808 [2024-11-26 23:11:45.788534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:06.808 [2024-11-26 23:11:45.788542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:06.808 [2024-11-26 23:11:45.788549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:06.808 [2024-11-26 23:11:45.788555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.808 [2024-11-26 23:11:45.788579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:06.808 [2024-11-26 23:11:45.788587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:06.808 [2024-11-26 23:11:45.788593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:06.808 [2024-11-26 23:11:45.788599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.808 [2024-11-26 23:11:45.788657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:06.808 [2024-11-26 23:11:45.788665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:06.808 [2024-11-26 23:11:45.788672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:06.808 [2024-11-26 23:11:45.788677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.808 [2024-11-26 23:11:45.788704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:06.808 [2024-11-26 23:11:45.788713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:06.808 [2024-11-26 23:11:45.788720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:06.808 [2024-11-26 23:11:45.788731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.808 [2024-11-26 23:11:45.788764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:06.808 [2024-11-26 23:11:45.788771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:06.808 [2024-11-26 23:11:45.788778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:06.808 [2024-11-26 23:11:45.788783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.808 [2024-11-26 23:11:45.788830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:06.808 [2024-11-26 23:11:45.788842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:06.808 [2024-11-26 23:11:45.788849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:06.808 [2024-11-26 23:11:45.788855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.808 [2024-11-26 23:11:45.788962] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 128.652 ms, result 0 00:27:07.751 00:27:07.751 00:27:07.751 23:11:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:10.299 23:11:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:10.299 [2024-11-26 23:11:49.084674] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
00:27:10.299 [2024-11-26 23:11:49.084813] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94319 ] 00:27:10.299 [2024-11-26 23:11:49.224745] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:10.299 [2024-11-26 23:11:49.247888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.299 [2024-11-26 23:11:49.290259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.560 [2024-11-26 23:11:49.444289] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:10.560 [2024-11-26 23:11:49.444412] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:10.560 [2024-11-26 23:11:49.607864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.560 [2024-11-26 23:11:49.607925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:10.560 [2024-11-26 23:11:49.607942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:10.560 [2024-11-26 23:11:49.607952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.560 [2024-11-26 23:11:49.608025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.560 [2024-11-26 23:11:49.608037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:10.560 [2024-11-26 23:11:49.608046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:27:10.560 [2024-11-26 23:11:49.608065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.560 [2024-11-26 23:11:49.608090] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:10.560 [2024-11-26 23:11:49.608374] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:10.560 [2024-11-26 23:11:49.608394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.560 [2024-11-26 23:11:49.608411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:10.561 [2024-11-26 23:11:49.608422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:27:10.561 [2024-11-26 23:11:49.608430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.561 [2024-11-26 23:11:49.610690] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:10.561 [2024-11-26 23:11:49.615711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.561 [2024-11-26 23:11:49.615780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:10.561 [2024-11-26 23:11:49.615807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.023 ms 00:27:10.561 [2024-11-26 23:11:49.615815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.561 [2024-11-26 23:11:49.615908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.561 [2024-11-26 23:11:49.615920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:10.561 [2024-11-26 23:11:49.615936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:27:10.561 [2024-11-26 
23:11:49.615945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.561 [2024-11-26 23:11:49.627902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.561 [2024-11-26 23:11:49.627951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:10.561 [2024-11-26 23:11:49.627964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.907 ms 00:27:10.561 [2024-11-26 23:11:49.627977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.561 [2024-11-26 23:11:49.628088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.561 [2024-11-26 23:11:49.628102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:10.561 [2024-11-26 23:11:49.628112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:27:10.561 [2024-11-26 23:11:49.628121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.561 [2024-11-26 23:11:49.628189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.561 [2024-11-26 23:11:49.628200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:10.561 [2024-11-26 23:11:49.628219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:10.561 [2024-11-26 23:11:49.628226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.561 [2024-11-26 23:11:49.628251] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:10.561 [2024-11-26 23:11:49.630955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.561 [2024-11-26 23:11:49.631006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:10.561 [2024-11-26 23:11:49.631018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.710 ms 00:27:10.561 [2024-11-26 23:11:49.631030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.561 [2024-11-26 23:11:49.631068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.561 [2024-11-26 23:11:49.631081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:10.561 [2024-11-26 23:11:49.631091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:10.561 [2024-11-26 23:11:49.631100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.561 [2024-11-26 23:11:49.631128] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:10.561 [2024-11-26 23:11:49.631155] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:10.561 [2024-11-26 23:11:49.631206] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:10.561 [2024-11-26 23:11:49.631224] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:10.561 [2024-11-26 23:11:49.631368] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:10.561 [2024-11-26 23:11:49.631381] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:10.561 [2024-11-26 23:11:49.631397] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:10.561 
[2024-11-26 23:11:49.631409] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:10.561 [2024-11-26 23:11:49.631420] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:10.561 [2024-11-26 23:11:49.631429] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:10.561 [2024-11-26 23:11:49.631438] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:10.561 [2024-11-26 23:11:49.631452] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:10.561 [2024-11-26 23:11:49.631461] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:10.561 [2024-11-26 23:11:49.631469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.561 [2024-11-26 23:11:49.631477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:10.561 [2024-11-26 23:11:49.631488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:27:10.561 [2024-11-26 23:11:49.631496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.561 [2024-11-26 23:11:49.631585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.561 [2024-11-26 23:11:49.631601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:10.561 [2024-11-26 23:11:49.631610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:27:10.561 [2024-11-26 23:11:49.631619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.561 [2024-11-26 23:11:49.631721] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:10.561 [2024-11-26 23:11:49.631732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:10.561 [2024-11-26 23:11:49.631741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:10.561 [2024-11-26 23:11:49.631760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:10.561 [2024-11-26 23:11:49.631769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:10.561 [2024-11-26 23:11:49.631778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:10.561 [2024-11-26 23:11:49.631795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:10.561 [2024-11-26 23:11:49.631810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:10.561 [2024-11-26 23:11:49.631820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:10.561 [2024-11-26 23:11:49.631828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:10.561 [2024-11-26 23:11:49.631837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:10.561 [2024-11-26 23:11:49.631845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:10.561 [2024-11-26 23:11:49.631854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:10.561 [2024-11-26 23:11:49.631862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:10.561 [2024-11-26 23:11:49.631871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:10.561 [2024-11-26 23:11:49.631879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:10.561 [2024-11-26 23:11:49.631887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 
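The layout dump above can be cross-checked arithmetically: the l2p region must hold one address per L2P entry (20971520 entries x 4 bytes = 80 MiB, exactly the printed region size), and successive NV cache regions tile the device back to back. A minimal Python sketch of both checks, assuming SPDK FTL's 4 KiB block size so that the printed MiB values are exact multiples of 1/8 MiB; the names and numbers are copied from the dump:

# Sanity checks against the ftl_layout dump above.
L2P_ENTRIES = 20_971_520          # "L2P entries" in the dump
L2P_ADDR_SIZE = 4                 # "L2P address size", bytes per entry
assert L2P_ENTRIES * L2P_ADDR_SIZE == 80 * 2**20   # "Region l2p ... blocks: 80.00 MiB"

# (name, offset MiB, size MiB) for the first NV cache regions; 0.12 as
# printed is really 0.125 MiB, i.e. 32 blocks of 4 KiB.
regions = [
    ("sb",              0.000,  0.125),
    ("l2p",             0.125, 80.000),
    ("band_md",        80.125,  0.500),
    ("band_md_mirror", 80.625,  0.500),
]
for (name, off, size), (nxt, off_next, _) in zip(regions, regions[1:]):
    assert off + size == off_next, (name, nxt)     # regions are contiguous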
00:27:10.561 [2024-11-26 23:11:49.631895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:10.561 [2024-11-26 23:11:49.631902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:10.561 [2024-11-26 23:11:49.631910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:10.561 [2024-11-26 23:11:49.631918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:10.561 [2024-11-26 23:11:49.631927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:10.561 [2024-11-26 23:11:49.631938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:10.561 [2024-11-26 23:11:49.631947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:10.561 [2024-11-26 23:11:49.631954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:10.561 [2024-11-26 23:11:49.631962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:10.561 [2024-11-26 23:11:49.631969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:10.561 [2024-11-26 23:11:49.631977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:10.561 [2024-11-26 23:11:49.631985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:10.561 [2024-11-26 23:11:49.631994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:10.561 [2024-11-26 23:11:49.632002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:10.561 [2024-11-26 23:11:49.632010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:10.561 [2024-11-26 23:11:49.632018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:10.561 [2024-11-26 23:11:49.632026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:10.561 [2024-11-26 23:11:49.632033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:10.561 [2024-11-26 23:11:49.632041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:10.561 [2024-11-26 23:11:49.632048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:10.561 [2024-11-26 23:11:49.632055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:10.561 [2024-11-26 23:11:49.632065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:10.561 [2024-11-26 23:11:49.632075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:10.561 [2024-11-26 23:11:49.632083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:10.561 [2024-11-26 23:11:49.632091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:10.561 [2024-11-26 23:11:49.632099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:10.561 [2024-11-26 23:11:49.632109] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:10.561 [2024-11-26 23:11:49.632122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:10.561 [2024-11-26 23:11:49.632131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:10.561 [2024-11-26 23:11:49.632140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:10.561 [2024-11-26 23:11:49.632149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:10.561 [2024-11-26 23:11:49.632157] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:10.561 [2024-11-26 23:11:49.632165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:10.561 [2024-11-26 23:11:49.632173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:10.561 [2024-11-26 23:11:49.632179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:10.562 [2024-11-26 23:11:49.632186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:10.562 [2024-11-26 23:11:49.632195] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:10.562 [2024-11-26 23:11:49.632211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:10.562 [2024-11-26 23:11:49.632220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:10.562 [2024-11-26 23:11:49.632228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:10.562 [2024-11-26 23:11:49.632235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:10.562 [2024-11-26 23:11:49.632243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:10.562 [2024-11-26 23:11:49.632251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:10.562 [2024-11-26 23:11:49.632258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:10.562 [2024-11-26 23:11:49.632265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:10.562 [2024-11-26 23:11:49.632272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:10.562 [2024-11-26 23:11:49.632279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:10.562 [2024-11-26 23:11:49.632286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:10.562 [2024-11-26 23:11:49.632292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:10.562 [2024-11-26 23:11:49.632323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:10.562 [2024-11-26 23:11:49.632330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:10.562 [2024-11-26 23:11:49.632338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:10.562 [2024-11-26 23:11:49.632345] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:10.562 [2024-11-26 23:11:49.632358] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 
blk_offs:0x0 blk_sz:0x20 00:27:10.562 [2024-11-26 23:11:49.632368] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:10.562 [2024-11-26 23:11:49.632377] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:10.562 [2024-11-26 23:11:49.632386] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:10.562 [2024-11-26 23:11:49.632394] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:10.562 [2024-11-26 23:11:49.632403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.562 [2024-11-26 23:11:49.632412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:10.562 [2024-11-26 23:11:49.632424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:27:10.562 [2024-11-26 23:11:49.632432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.562 [2024-11-26 23:11:49.652969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.562 [2024-11-26 23:11:49.653024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:10.562 [2024-11-26 23:11:49.653037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.468 ms 00:27:10.562 [2024-11-26 23:11:49.653047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.562 [2024-11-26 23:11:49.653144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.562 [2024-11-26 23:11:49.653155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:10.562 [2024-11-26 23:11:49.653165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:27:10.562 [2024-11-26 23:11:49.653183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.562 [2024-11-26 23:11:49.681480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.562 [2024-11-26 23:11:49.681587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:10.562 [2024-11-26 23:11:49.681612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.229 ms 00:27:10.562 [2024-11-26 23:11:49.681639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.562 [2024-11-26 23:11:49.681722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.562 [2024-11-26 23:11:49.681752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:10.562 [2024-11-26 23:11:49.681776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:10.562 [2024-11-26 23:11:49.681792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.562 [2024-11-26 23:11:49.682652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.562 [2024-11-26 23:11:49.682711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:10.562 [2024-11-26 23:11:49.682731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.760 ms 00:27:10.562 [2024-11-26 23:11:49.682747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.562 [2024-11-26 23:11:49.683024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
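Each management step above is traced as the same four NOTICE lines (Action, name, duration, status), and each sequence is closed by a "Management process finished" summary carrying the wall-clock total. Summing the per-step durations is a quick way to see where startup time goes; a throwaway parser sketch (a hypothetical helper, not part of SPDK, keyed to the trace_step lines above):

import re

# Matches the "duration: N ms" line that mngt/ftl_mngt.c: 430:trace_step
# prints once per management step for device ftl0.
DUR = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: ([0-9.]+) ms")

def total_step_ms(log_text: str) -> float:
    # Sum of all per-step durations found in the given log text.
    return sum(float(m.group(1)) for m in DUR.finditer(log_text))

Fed the 'FTL startup' entries of this run, the sum comes out at roughly 160 ms, slightly under the 167.223 ms reported by the closing "Management process finished" line, because time spent between steps is attributed to no step.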
00:27:10.562 [2024-11-26 23:11:49.683044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:10.562 [2024-11-26 23:11:49.683060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:27:10.562 [2024-11-26 23:11:49.683080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.825 [2024-11-26 23:11:49.694869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.825 [2024-11-26 23:11:49.694932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:10.825 [2024-11-26 23:11:49.694948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.753 ms 00:27:10.825 [2024-11-26 23:11:49.694965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.825 [2024-11-26 23:11:49.699536] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:10.825 [2024-11-26 23:11:49.699608] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:10.825 [2024-11-26 23:11:49.699625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.825 [2024-11-26 23:11:49.699636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:10.825 [2024-11-26 23:11:49.699647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.529 ms 00:27:10.825 [2024-11-26 23:11:49.699656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.825 [2024-11-26 23:11:49.716087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.825 [2024-11-26 23:11:49.716141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:10.825 [2024-11-26 23:11:49.716156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.278 ms 00:27:10.825 [2024-11-26 23:11:49.716176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.825 [2024-11-26 23:11:49.719073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.825 [2024-11-26 23:11:49.719123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:10.825 [2024-11-26 23:11:49.719136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.861 ms 00:27:10.825 [2024-11-26 23:11:49.719144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.825 [2024-11-26 23:11:49.721495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.825 [2024-11-26 23:11:49.721536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:10.825 [2024-11-26 23:11:49.721547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.301 ms 00:27:10.825 [2024-11-26 23:11:49.721583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.825 [2024-11-26 23:11:49.721984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.825 [2024-11-26 23:11:49.722010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:10.825 [2024-11-26 23:11:49.722025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:27:10.825 [2024-11-26 23:11:49.722040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.825 [2024-11-26 23:11:49.751268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.825 [2024-11-26 23:11:49.751355] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:10.825 [2024-11-26 23:11:49.751372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.205 ms 00:27:10.825 [2024-11-26 23:11:49.751394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.825 [2024-11-26 23:11:49.760048] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:10.825 [2024-11-26 23:11:49.764078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.825 [2024-11-26 23:11:49.764124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:10.825 [2024-11-26 23:11:49.764136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.622 ms 00:27:10.825 [2024-11-26 23:11:49.764146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.825 [2024-11-26 23:11:49.764246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.825 [2024-11-26 23:11:49.764258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:10.825 [2024-11-26 23:11:49.764269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:27:10.825 [2024-11-26 23:11:49.764278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.825 [2024-11-26 23:11:49.766722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.825 [2024-11-26 23:11:49.766790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:10.825 [2024-11-26 23:11:49.766803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.381 ms 00:27:10.825 [2024-11-26 23:11:49.766813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.825 [2024-11-26 23:11:49.766862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.825 [2024-11-26 23:11:49.766872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:10.825 [2024-11-26 23:11:49.766886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:10.825 [2024-11-26 23:11:49.766894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.825 [2024-11-26 23:11:49.766947] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:10.825 [2024-11-26 23:11:49.766960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.825 [2024-11-26 23:11:49.766973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:10.825 [2024-11-26 23:11:49.766982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:10.825 [2024-11-26 23:11:49.766990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.825 [2024-11-26 23:11:49.773844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.825 [2024-11-26 23:11:49.773899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:10.825 [2024-11-26 23:11:49.773911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.829 ms 00:27:10.825 [2024-11-26 23:11:49.773931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.825 [2024-11-26 23:11:49.774028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.825 [2024-11-26 23:11:49.774048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:10.825 [2024-11-26 23:11:49.774058] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:27:10.825 [2024-11-26 23:11:49.774066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.825 [2024-11-26 23:11:49.775677] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 167.223 ms, result 0 00:27:12.216  [2024-11-26T23:11:52.288Z] Copying: 968/1048576 [kB] (968 kBps) [2024-11-26T23:11:53.227Z] Copying: 3892/1048576 [kB] (2924 kBps) [2024-11-26T23:11:54.172Z] Copying: 15/1024 [MB] (11 MBps) [2024-11-26T23:11:55.150Z] Copying: 35/1024 [MB] (20 MBps) [2024-11-26T23:11:56.177Z] Copying: 66/1024 [MB] (30 MBps) [2024-11-26T23:11:57.117Z] Copying: 88/1024 [MB] (22 MBps) [2024-11-26T23:11:58.060Z] Copying: 111/1024 [MB] (23 MBps) [2024-11-26T23:11:59.016Z] Copying: 138/1024 [MB] (26 MBps) [2024-11-26T23:11:59.964Z] Copying: 167/1024 [MB] (29 MBps) [2024-11-26T23:12:01.352Z] Copying: 193/1024 [MB] (26 MBps) [2024-11-26T23:12:02.296Z] Copying: 222/1024 [MB] (28 MBps) [2024-11-26T23:12:03.229Z] Copying: 247/1024 [MB] (24 MBps) [2024-11-26T23:12:04.180Z] Copying: 286/1024 [MB] (39 MBps) [2024-11-26T23:12:05.129Z] Copying: 325/1024 [MB] (39 MBps) [2024-11-26T23:12:06.067Z] Copying: 359/1024 [MB] (33 MBps) [2024-11-26T23:12:07.007Z] Copying: 390/1024 [MB] (30 MBps) [2024-11-26T23:12:08.394Z] Copying: 411/1024 [MB] (21 MBps) [2024-11-26T23:12:08.967Z] Copying: 429/1024 [MB] (18 MBps) [2024-11-26T23:12:10.363Z] Copying: 453/1024 [MB] (23 MBps) [2024-11-26T23:12:11.304Z] Copying: 484/1024 [MB] (30 MBps) [2024-11-26T23:12:12.237Z] Copying: 516/1024 [MB] (32 MBps) [2024-11-26T23:12:13.179Z] Copying: 566/1024 [MB] (49 MBps) [2024-11-26T23:12:14.127Z] Copying: 588/1024 [MB] (22 MBps) [2024-11-26T23:12:15.070Z] Copying: 614/1024 [MB] (26 MBps) [2024-11-26T23:12:16.012Z] Copying: 636/1024 [MB] (21 MBps) [2024-11-26T23:12:17.398Z] Copying: 659/1024 [MB] (23 MBps) [2024-11-26T23:12:17.966Z] Copying: 684/1024 [MB] (24 MBps) [2024-11-26T23:12:19.356Z] Copying: 711/1024 [MB] (27 MBps) [2024-11-26T23:12:20.307Z] Copying: 737/1024 [MB] (25 MBps) [2024-11-26T23:12:21.257Z] Copying: 760/1024 [MB] (23 MBps) [2024-11-26T23:12:22.199Z] Copying: 782/1024 [MB] (21 MBps) [2024-11-26T23:12:23.144Z] Copying: 808/1024 [MB] (25 MBps) [2024-11-26T23:12:24.080Z] Copying: 834/1024 [MB] (26 MBps) [2024-11-26T23:12:25.017Z] Copying: 864/1024 [MB] (29 MBps) [2024-11-26T23:12:26.415Z] Copying: 908/1024 [MB] (44 MBps) [2024-11-26T23:12:27.014Z] Copying: 958/1024 [MB] (50 MBps) [2024-11-26T23:12:27.599Z] Copying: 1003/1024 [MB] (45 MBps) [2024-11-26T23:12:28.175Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-11-26 23:12:27.893855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.048 [2024-11-26 23:12:27.893975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:49.048 [2024-11-26 23:12:27.894006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:49.048 [2024-11-26 23:12:27.894025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.048 [2024-11-26 23:12:27.894080] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:49.048 [2024-11-26 23:12:27.895027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.048 [2024-11-26 23:12:27.895094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:49.048 [2024-11-26 23:12:27.895117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.913 ms 00:27:49.048 [2024-11-26 23:12:27.895144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.048 [2024-11-26 23:12:27.895668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.048 [2024-11-26 23:12:27.895703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:49.048 [2024-11-26 23:12:27.895722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.480 ms 00:27:49.048 [2024-11-26 23:12:27.895739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.048 [2024-11-26 23:12:27.908861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.048 [2024-11-26 23:12:27.908926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:49.048 [2024-11-26 23:12:27.908945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.090 ms 00:27:49.048 [2024-11-26 23:12:27.908954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.048 [2024-11-26 23:12:27.915164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.048 [2024-11-26 23:12:27.915212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:49.048 [2024-11-26 23:12:27.915224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.175 ms 00:27:49.048 [2024-11-26 23:12:27.915233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.048 [2024-11-26 23:12:27.918061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.048 [2024-11-26 23:12:27.918125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:49.048 [2024-11-26 23:12:27.918136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.768 ms 00:27:49.048 [2024-11-26 23:12:27.918144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.048 [2024-11-26 23:12:27.923987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.048 [2024-11-26 23:12:27.924042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:49.048 [2024-11-26 23:12:27.924064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.800 ms 00:27:49.048 [2024-11-26 23:12:27.924072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.048 [2024-11-26 23:12:27.929131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.049 [2024-11-26 23:12:27.929178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:49.049 [2024-11-26 23:12:27.929191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.011 ms 00:27:49.049 [2024-11-26 23:12:27.929199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.049 [2024-11-26 23:12:27.932795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.049 [2024-11-26 23:12:27.932843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:49.049 [2024-11-26 23:12:27.932865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.580 ms 00:27:49.049 [2024-11-26 23:12:27.932873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.049 [2024-11-26 23:12:27.936017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.049 [2024-11-26 23:12:27.936065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:49.049 [2024-11-26 
23:12:27.936074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.102 ms 00:27:49.049 [2024-11-26 23:12:27.936081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.049 [2024-11-26 23:12:27.938447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.049 [2024-11-26 23:12:27.938495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:49.049 [2024-11-26 23:12:27.938505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.324 ms 00:27:49.049 [2024-11-26 23:12:27.938512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.049 [2024-11-26 23:12:27.940911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.049 [2024-11-26 23:12:27.940957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:49.049 [2024-11-26 23:12:27.940967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.330 ms 00:27:49.049 [2024-11-26 23:12:27.940975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.049 [2024-11-26 23:12:27.941011] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:49.049 [2024-11-26 23:12:27.941033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:49.049 [2024-11-26 23:12:27.941049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:49.049 [2024-11-26 23:12:27.941058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941172] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 
[2024-11-26 23:12:27.941382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 
state: free 00:27:49.049 [2024-11-26 23:12:27.941608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:49.049 [2024-11-26 23:12:27.941667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:49.050 [2024-11-26 23:12:27.941676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:49.050 [2024-11-26 23:12:27.941687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:49.050 [2024-11-26 23:12:27.941695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:49.050 [2024-11-26 23:12:27.941703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:49.050 [2024-11-26 23:12:27.941720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:49.050 [2024-11-26 23:12:27.941729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:49.050 [2024-11-26 23:12:27.941739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:49.050 [2024-11-26 23:12:27.941747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:49.050 [2024-11-26 23:12:27.941755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:49.050 [2024-11-26 23:12:27.941766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:49.050 [2024-11-26 23:12:27.941773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:49.050 [2024-11-26 23:12:27.941781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:49.050 [2024-11-26 23:12:27.941789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:49.050 [2024-11-26 23:12:27.941797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:49.050 [2024-11-26 23:12:27.941805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:49.050 [2024-11-26 23:12:27.941813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:49.050 [2024-11-26 23:12:27.941820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 
0 / 261120 wr_cnt: 0 state: free 00:27:49.050 [2024-11-26 23:12:27.941827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:49.050 [2024-11-26 23:12:27.941835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:49.050 [2024-11-26 23:12:27.941842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:49.050 [2024-11-26 23:12:27.941850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:49.050 [2024-11-26 23:12:27.941858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:49.050 [2024-11-26 23:12:27.941867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:49.050 [2024-11-26 23:12:27.941875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:49.050 [2024-11-26 23:12:27.941883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:49.050 [2024-11-26 23:12:27.941891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:49.050 [2024-11-26 23:12:27.941908] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:49.050 [2024-11-26 23:12:27.941918] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 16fdd1ae-09cf-4996-8826-278dc9b2a035 00:27:49.050 [2024-11-26 23:12:27.941927] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:49.050 [2024-11-26 23:12:27.941941] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 152512 00:27:49.050 [2024-11-26 23:12:27.941949] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 150528 00:27:49.050 [2024-11-26 23:12:27.941960] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0132 00:27:49.050 [2024-11-26 23:12:27.941969] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:49.050 [2024-11-26 23:12:27.941981] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:49.050 [2024-11-26 23:12:27.941989] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:49.050 [2024-11-26 23:12:27.941997] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:49.050 [2024-11-26 23:12:27.942003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:49.050 [2024-11-26 23:12:27.942011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.050 [2024-11-26 23:12:27.942020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:49.050 [2024-11-26 23:12:27.942037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.001 ms 00:27:49.050 [2024-11-26 23:12:27.942045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.050 [2024-11-26 23:12:27.944675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.050 [2024-11-26 23:12:27.944712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:49.050 [2024-11-26 23:12:27.944724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.610 ms 00:27:49.050 [2024-11-26 23:12:27.944732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.050 [2024-11-26 23:12:27.944859] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.050 [2024-11-26 23:12:27.944868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:49.050 [2024-11-26 23:12:27.944877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:27:49.050 [2024-11-26 23:12:27.944888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.050 [2024-11-26 23:12:27.952914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.050 [2024-11-26 23:12:27.952967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:49.050 [2024-11-26 23:12:27.952977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.050 [2024-11-26 23:12:27.952987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.050 [2024-11-26 23:12:27.953059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.050 [2024-11-26 23:12:27.953069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:49.050 [2024-11-26 23:12:27.953078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.050 [2024-11-26 23:12:27.953087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.050 [2024-11-26 23:12:27.953152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.050 [2024-11-26 23:12:27.953163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:49.050 [2024-11-26 23:12:27.953172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.050 [2024-11-26 23:12:27.953180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.050 [2024-11-26 23:12:27.953202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.050 [2024-11-26 23:12:27.953214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:49.050 [2024-11-26 23:12:27.953222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.050 [2024-11-26 23:12:27.953234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.050 [2024-11-26 23:12:27.967720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.050 [2024-11-26 23:12:27.967771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:49.050 [2024-11-26 23:12:27.967783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.050 [2024-11-26 23:12:27.967801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.050 [2024-11-26 23:12:27.979383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.050 [2024-11-26 23:12:27.979433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:49.050 [2024-11-26 23:12:27.979445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.050 [2024-11-26 23:12:27.979454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.050 [2024-11-26 23:12:27.979512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.050 [2024-11-26 23:12:27.979523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:49.050 [2024-11-26 23:12:27.979532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.050 [2024-11-26 23:12:27.979541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:27:49.050 [2024-11-26 23:12:27.979589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.050 [2024-11-26 23:12:27.979601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:49.050 [2024-11-26 23:12:27.979618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.050 [2024-11-26 23:12:27.979626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.050 [2024-11-26 23:12:27.979699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.050 [2024-11-26 23:12:27.979714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:49.050 [2024-11-26 23:12:27.979723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.050 [2024-11-26 23:12:27.979731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.050 [2024-11-26 23:12:27.979761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.050 [2024-11-26 23:12:27.979779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:49.050 [2024-11-26 23:12:27.979787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.050 [2024-11-26 23:12:27.979798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.050 [2024-11-26 23:12:27.979839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.050 [2024-11-26 23:12:27.979851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:49.050 [2024-11-26 23:12:27.979859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.050 [2024-11-26 23:12:27.979868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.050 [2024-11-26 23:12:27.979910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.050 [2024-11-26 23:12:27.979924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:49.050 [2024-11-26 23:12:27.979934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.050 [2024-11-26 23:12:27.979945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.050 [2024-11-26 23:12:27.980074] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 86.219 ms, result 0 00:27:49.311 00:27:49.311 00:27:49.311 23:12:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:51.866 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:51.866 23:12:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:51.866 [2024-11-26 23:12:30.485707] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:27:51.866 [2024-11-26 23:12:30.485833] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94783 ] 00:27:51.866 [2024-11-26 23:12:30.620006] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:27:51.866 [2024-11-26 23:12:30.646359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.866 [2024-11-26 23:12:30.671129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.866 [2024-11-26 23:12:30.787743] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:51.866 [2024-11-26 23:12:30.787837] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:51.866 [2024-11-26 23:12:30.948698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.866 [2024-11-26 23:12:30.948761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:51.866 [2024-11-26 23:12:30.948777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:51.866 [2024-11-26 23:12:30.948786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.866 [2024-11-26 23:12:30.948845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.866 [2024-11-26 23:12:30.948856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:51.866 [2024-11-26 23:12:30.948865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:27:51.866 [2024-11-26 23:12:30.948876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.866 [2024-11-26 23:12:30.948897] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:51.866 [2024-11-26 23:12:30.949545] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:51.866 [2024-11-26 23:12:30.949596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.866 [2024-11-26 23:12:30.949610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:51.866 [2024-11-26 23:12:30.949621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:27:51.866 [2024-11-26 23:12:30.949629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.866 [2024-11-26 23:12:30.951445] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:51.866 [2024-11-26 23:12:30.955189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.866 [2024-11-26 23:12:30.955242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:51.866 [2024-11-26 23:12:30.955264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.746 ms 00:27:51.866 [2024-11-26 23:12:30.955280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.866 [2024-11-26 23:12:30.955372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.866 [2024-11-26 23:12:30.955389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:51.866 [2024-11-26 23:12:30.955399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:27:51.866 [2024-11-26 23:12:30.955407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.866 [2024-11-26 23:12:30.963362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.866 [2024-11-26 23:12:30.963409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:51.866 [2024-11-26 23:12:30.963419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.913 ms 00:27:51.866 [2024-11-26 23:12:30.963435] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.866 [2024-11-26 23:12:30.963537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.866 [2024-11-26 23:12:30.963550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:51.866 [2024-11-26 23:12:30.963562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:27:51.866 [2024-11-26 23:12:30.963571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.866 [2024-11-26 23:12:30.963634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.867 [2024-11-26 23:12:30.963646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:51.867 [2024-11-26 23:12:30.963662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:51.867 [2024-11-26 23:12:30.963673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.867 [2024-11-26 23:12:30.963696] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:51.867 [2024-11-26 23:12:30.965818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.867 [2024-11-26 23:12:30.965865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:51.867 [2024-11-26 23:12:30.965875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.127 ms 00:27:51.867 [2024-11-26 23:12:30.965883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.867 [2024-11-26 23:12:30.965920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.867 [2024-11-26 23:12:30.965933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:51.867 [2024-11-26 23:12:30.965945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:51.867 [2024-11-26 23:12:30.965953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.867 [2024-11-26 23:12:30.965978] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:51.867 [2024-11-26 23:12:30.966004] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:51.867 [2024-11-26 23:12:30.966046] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:51.867 [2024-11-26 23:12:30.966066] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:51.867 [2024-11-26 23:12:30.966173] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:51.867 [2024-11-26 23:12:30.966187] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:51.867 [2024-11-26 23:12:30.966198] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:51.867 [2024-11-26 23:12:30.966212] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:51.867 [2024-11-26 23:12:30.966222] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:51.867 [2024-11-26 23:12:30.966238] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:51.867 [2024-11-26 23:12:30.966248] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] L2P address size: 4 00:27:51.867 [2024-11-26 23:12:30.966255] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:51.867 [2024-11-26 23:12:30.966263] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:51.867 [2024-11-26 23:12:30.966274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.867 [2024-11-26 23:12:30.966283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:51.867 [2024-11-26 23:12:30.966309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:27:51.867 [2024-11-26 23:12:30.966322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.867 [2024-11-26 23:12:30.966406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.867 [2024-11-26 23:12:30.966424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:51.867 [2024-11-26 23:12:30.966436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:27:51.867 [2024-11-26 23:12:30.966443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.867 [2024-11-26 23:12:30.966541] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:51.867 [2024-11-26 23:12:30.966553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:51.867 [2024-11-26 23:12:30.966563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:51.867 [2024-11-26 23:12:30.966571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:51.867 [2024-11-26 23:12:30.966583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:51.867 [2024-11-26 23:12:30.966592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:51.867 [2024-11-26 23:12:30.966606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:51.867 [2024-11-26 23:12:30.966614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:51.867 [2024-11-26 23:12:30.966625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:51.867 [2024-11-26 23:12:30.966634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:51.867 [2024-11-26 23:12:30.966641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:51.867 [2024-11-26 23:12:30.966649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:51.867 [2024-11-26 23:12:30.966657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:51.867 [2024-11-26 23:12:30.966665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:51.867 [2024-11-26 23:12:30.966673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:51.867 [2024-11-26 23:12:30.966681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:51.867 [2024-11-26 23:12:30.966690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:51.867 [2024-11-26 23:12:30.966698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:51.867 [2024-11-26 23:12:30.966705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:51.867 [2024-11-26 23:12:30.966715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:51.867 [2024-11-26 23:12:30.966722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:51.867 [2024-11-26 
23:12:30.966730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:51.867 [2024-11-26 23:12:30.966738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:51.867 [2024-11-26 23:12:30.966746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:51.867 [2024-11-26 23:12:30.966759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:51.867 [2024-11-26 23:12:30.966766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:51.867 [2024-11-26 23:12:30.966775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:51.867 [2024-11-26 23:12:30.966783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:51.867 [2024-11-26 23:12:30.966792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:51.867 [2024-11-26 23:12:30.966801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:51.867 [2024-11-26 23:12:30.966809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:51.867 [2024-11-26 23:12:30.966818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:51.867 [2024-11-26 23:12:30.966826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:51.867 [2024-11-26 23:12:30.966834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:51.867 [2024-11-26 23:12:30.966841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:51.867 [2024-11-26 23:12:30.966848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:51.867 [2024-11-26 23:12:30.966855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:51.867 [2024-11-26 23:12:30.966862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:51.867 [2024-11-26 23:12:30.966870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:51.867 [2024-11-26 23:12:30.966877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:51.867 [2024-11-26 23:12:30.966887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:51.867 [2024-11-26 23:12:30.966895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:51.867 [2024-11-26 23:12:30.966902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:51.867 [2024-11-26 23:12:30.966909] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:51.867 [2024-11-26 23:12:30.966917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:51.867 [2024-11-26 23:12:30.966925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:51.867 [2024-11-26 23:12:30.966932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:51.867 [2024-11-26 23:12:30.966941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:51.867 [2024-11-26 23:12:30.966948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:51.867 [2024-11-26 23:12:30.966958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:51.867 [2024-11-26 23:12:30.966965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:51.867 [2024-11-26 23:12:30.966972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:51.867 [2024-11-26 23:12:30.966979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 102400.00 MiB 00:27:51.867 [2024-11-26 23:12:30.966990] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:51.867 [2024-11-26 23:12:30.967004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:51.867 [2024-11-26 23:12:30.967014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:51.867 [2024-11-26 23:12:30.967024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:51.867 [2024-11-26 23:12:30.967036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:51.867 [2024-11-26 23:12:30.967044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:51.867 [2024-11-26 23:12:30.967050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:51.868 [2024-11-26 23:12:30.967058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:51.868 [2024-11-26 23:12:30.967066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:51.868 [2024-11-26 23:12:30.967074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:51.868 [2024-11-26 23:12:30.967081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:51.868 [2024-11-26 23:12:30.967088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:51.868 [2024-11-26 23:12:30.967096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:51.868 [2024-11-26 23:12:30.967104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:51.868 [2024-11-26 23:12:30.967110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:51.868 [2024-11-26 23:12:30.967122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:51.868 [2024-11-26 23:12:30.967130] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:51.868 [2024-11-26 23:12:30.967139] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:51.868 [2024-11-26 23:12:30.967152] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:51.868 [2024-11-26 23:12:30.967167] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:51.868 [2024-11-26 23:12:30.967179] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:51.868 [2024-11-26 23:12:30.967186] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:51.868 [2024-11-26 23:12:30.967193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.868 [2024-11-26 23:12:30.967200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:51.868 [2024-11-26 23:12:30.967211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.720 ms 00:27:51.868 [2024-11-26 23:12:30.967221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.868 [2024-11-26 23:12:30.981950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.868 [2024-11-26 23:12:30.981988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:51.868 [2024-11-26 23:12:30.981999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.685 ms 00:27:51.868 [2024-11-26 23:12:30.982007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.868 [2024-11-26 23:12:30.982093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.868 [2024-11-26 23:12:30.982102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:51.868 [2024-11-26 23:12:30.982117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:27:51.868 [2024-11-26 23:12:30.982124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.130 [2024-11-26 23:12:31.003645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.130 [2024-11-26 23:12:31.003716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:52.130 [2024-11-26 23:12:31.003733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.459 ms 00:27:52.130 [2024-11-26 23:12:31.003745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.130 [2024-11-26 23:12:31.003804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.130 [2024-11-26 23:12:31.003827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:52.130 [2024-11-26 23:12:31.003842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:52.130 [2024-11-26 23:12:31.003859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.130 [2024-11-26 23:12:31.004488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.130 [2024-11-26 23:12:31.004526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:52.130 [2024-11-26 23:12:31.004542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:27:52.130 [2024-11-26 23:12:31.004553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.130 [2024-11-26 23:12:31.004751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.130 [2024-11-26 23:12:31.004774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:52.130 [2024-11-26 23:12:31.004786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:27:52.130 [2024-11-26 23:12:31.004797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.130 [2024-11-26 23:12:31.013065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:27:52.130 [2024-11-26 23:12:31.013114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:52.130 [2024-11-26 23:12:31.013125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.236 ms 00:27:52.130 [2024-11-26 23:12:31.013146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.130 [2024-11-26 23:12:31.016861] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:52.130 [2024-11-26 23:12:31.016911] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:52.130 [2024-11-26 23:12:31.016929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.130 [2024-11-26 23:12:31.016939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:52.130 [2024-11-26 23:12:31.016949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.696 ms 00:27:52.130 [2024-11-26 23:12:31.016956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.130 [2024-11-26 23:12:31.033052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.130 [2024-11-26 23:12:31.033105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:52.130 [2024-11-26 23:12:31.033117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.042 ms 00:27:52.130 [2024-11-26 23:12:31.033126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.130 [2024-11-26 23:12:31.035744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.130 [2024-11-26 23:12:31.035792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:52.130 [2024-11-26 23:12:31.035803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.590 ms 00:27:52.130 [2024-11-26 23:12:31.035810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.130 [2024-11-26 23:12:31.038047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.130 [2024-11-26 23:12:31.038107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:52.130 [2024-11-26 23:12:31.038117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.191 ms 00:27:52.130 [2024-11-26 23:12:31.038125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.130 [2024-11-26 23:12:31.038500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.130 [2024-11-26 23:12:31.038535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:52.130 [2024-11-26 23:12:31.038545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:27:52.130 [2024-11-26 23:12:31.038557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.130 [2024-11-26 23:12:31.064829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.130 [2024-11-26 23:12:31.064889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:52.130 [2024-11-26 23:12:31.064902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.251 ms 00:27:52.130 [2024-11-26 23:12:31.064910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.130 [2024-11-26 23:12:31.073184] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:52.130 
[2024-11-26 23:12:31.076601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.130 [2024-11-26 23:12:31.076641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:52.130 [2024-11-26 23:12:31.076653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.638 ms 00:27:52.130 [2024-11-26 23:12:31.076661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.130 [2024-11-26 23:12:31.076744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.130 [2024-11-26 23:12:31.076755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:52.130 [2024-11-26 23:12:31.076770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:27:52.130 [2024-11-26 23:12:31.076778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.130 [2024-11-26 23:12:31.077608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.130 [2024-11-26 23:12:31.077645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:52.130 [2024-11-26 23:12:31.077656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.791 ms 00:27:52.130 [2024-11-26 23:12:31.077669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.130 [2024-11-26 23:12:31.077696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.130 [2024-11-26 23:12:31.077705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:52.130 [2024-11-26 23:12:31.077714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:52.131 [2024-11-26 23:12:31.077723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.131 [2024-11-26 23:12:31.077764] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:52.131 [2024-11-26 23:12:31.077775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.131 [2024-11-26 23:12:31.077786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:52.131 [2024-11-26 23:12:31.077798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:52.131 [2024-11-26 23:12:31.077806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.131 [2024-11-26 23:12:31.083593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.131 [2024-11-26 23:12:31.083644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:52.131 [2024-11-26 23:12:31.083656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.768 ms 00:27:52.131 [2024-11-26 23:12:31.083665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.131 [2024-11-26 23:12:31.083757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.131 [2024-11-26 23:12:31.083768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:52.131 [2024-11-26 23:12:31.083785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:27:52.131 [2024-11-26 23:12:31.083797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.131 [2024-11-26 23:12:31.084954] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 135.808 ms, result 0 00:27:53.519  [2024-11-26T23:12:33.592Z] Copying: 20/1024 [MB] (20 MBps) [2024-11-26T23:12:34.537Z] 
Copying: 37/1024 [MB] (17 MBps) [2024-11-26T23:12:35.481Z] Copying: 58/1024 [MB] (20 MBps) [2024-11-26T23:12:36.424Z] Copying: 80/1024 [MB] (22 MBps) [2024-11-26T23:12:37.370Z] Copying: 93/1024 [MB] (13 MBps) [2024-11-26T23:12:38.313Z] Copying: 107/1024 [MB] (13 MBps) [2024-11-26T23:12:39.699Z] Copying: 118/1024 [MB] (10 MBps) [2024-11-26T23:12:40.271Z] Copying: 130/1024 [MB] (12 MBps) [2024-11-26T23:12:41.673Z] Copying: 147/1024 [MB] (16 MBps) [2024-11-26T23:12:42.634Z] Copying: 158/1024 [MB] (11 MBps) [2024-11-26T23:12:43.577Z] Copying: 168/1024 [MB] (10 MBps) [2024-11-26T23:12:44.520Z] Copying: 181/1024 [MB] (12 MBps) [2024-11-26T23:12:45.464Z] Copying: 194/1024 [MB] (12 MBps) [2024-11-26T23:12:46.406Z] Copying: 204/1024 [MB] (10 MBps) [2024-11-26T23:12:47.349Z] Copying: 215/1024 [MB] (11 MBps) [2024-11-26T23:12:48.294Z] Copying: 228/1024 [MB] (12 MBps) [2024-11-26T23:12:49.692Z] Copying: 238/1024 [MB] (10 MBps) [2024-11-26T23:12:50.266Z] Copying: 249/1024 [MB] (10 MBps) [2024-11-26T23:12:51.653Z] Copying: 261/1024 [MB] (11 MBps) [2024-11-26T23:12:52.597Z] Copying: 272/1024 [MB] (11 MBps) [2024-11-26T23:12:53.537Z] Copying: 284/1024 [MB] (11 MBps) [2024-11-26T23:12:54.483Z] Copying: 301/1024 [MB] (16 MBps) [2024-11-26T23:12:55.429Z] Copying: 311/1024 [MB] (10 MBps) [2024-11-26T23:12:56.375Z] Copying: 321/1024 [MB] (10 MBps) [2024-11-26T23:12:57.321Z] Copying: 334/1024 [MB] (12 MBps) [2024-11-26T23:12:58.343Z] Copying: 347/1024 [MB] (13 MBps) [2024-11-26T23:12:59.340Z] Copying: 359/1024 [MB] (12 MBps) [2024-11-26T23:13:00.281Z] Copying: 371/1024 [MB] (11 MBps) [2024-11-26T23:13:01.664Z] Copying: 389/1024 [MB] (17 MBps) [2024-11-26T23:13:02.605Z] Copying: 399/1024 [MB] (10 MBps) [2024-11-26T23:13:03.546Z] Copying: 411/1024 [MB] (11 MBps) [2024-11-26T23:13:04.492Z] Copying: 421/1024 [MB] (10 MBps) [2024-11-26T23:13:05.438Z] Copying: 431/1024 [MB] (10 MBps) [2024-11-26T23:13:06.382Z] Copying: 443/1024 [MB] (11 MBps) [2024-11-26T23:13:07.326Z] Copying: 454/1024 [MB] (10 MBps) [2024-11-26T23:13:08.269Z] Copying: 464/1024 [MB] (10 MBps) [2024-11-26T23:13:09.666Z] Copying: 475/1024 [MB] (11 MBps) [2024-11-26T23:13:10.621Z] Copying: 486/1024 [MB] (10 MBps) [2024-11-26T23:13:11.566Z] Copying: 496/1024 [MB] (10 MBps) [2024-11-26T23:13:12.511Z] Copying: 507/1024 [MB] (10 MBps) [2024-11-26T23:13:13.454Z] Copying: 518/1024 [MB] (11 MBps) [2024-11-26T23:13:14.413Z] Copying: 529/1024 [MB] (11 MBps) [2024-11-26T23:13:15.359Z] Copying: 540/1024 [MB] (10 MBps) [2024-11-26T23:13:16.303Z] Copying: 550/1024 [MB] (10 MBps) [2024-11-26T23:13:17.689Z] Copying: 561/1024 [MB] (10 MBps) [2024-11-26T23:13:18.264Z] Copying: 572/1024 [MB] (11 MBps) [2024-11-26T23:13:19.649Z] Copying: 583/1024 [MB] (11 MBps) [2024-11-26T23:13:20.602Z] Copying: 595/1024 [MB] (11 MBps) [2024-11-26T23:13:21.546Z] Copying: 607/1024 [MB] (12 MBps) [2024-11-26T23:13:22.491Z] Copying: 618/1024 [MB] (10 MBps) [2024-11-26T23:13:23.449Z] Copying: 631/1024 [MB] (12 MBps) [2024-11-26T23:13:24.395Z] Copying: 647/1024 [MB] (16 MBps) [2024-11-26T23:13:25.358Z] Copying: 666/1024 [MB] (18 MBps) [2024-11-26T23:13:26.301Z] Copying: 690/1024 [MB] (24 MBps) [2024-11-26T23:13:27.687Z] Copying: 709/1024 [MB] (19 MBps) [2024-11-26T23:13:28.629Z] Copying: 723/1024 [MB] (14 MBps) [2024-11-26T23:13:29.570Z] Copying: 736/1024 [MB] (12 MBps) [2024-11-26T23:13:30.585Z] Copying: 750/1024 [MB] (14 MBps) [2024-11-26T23:13:31.545Z] Copying: 761/1024 [MB] (10 MBps) [2024-11-26T23:13:32.481Z] Copying: 771/1024 [MB] (10 MBps) [2024-11-26T23:13:33.431Z] Copying: 793/1024 
[MB] (22 MBps) [2024-11-26T23:13:34.376Z] Copying: 807/1024 [MB] (13 MBps) [2024-11-26T23:13:35.318Z] Copying: 819/1024 [MB] (11 MBps) [2024-11-26T23:13:36.703Z] Copying: 829/1024 [MB] (10 MBps) [2024-11-26T23:13:37.276Z] Copying: 841/1024 [MB] (12 MBps) [2024-11-26T23:13:38.664Z] Copying: 854/1024 [MB] (13 MBps) [2024-11-26T23:13:39.604Z] Copying: 869/1024 [MB] (14 MBps) [2024-11-26T23:13:40.547Z] Copying: 886/1024 [MB] (17 MBps) [2024-11-26T23:13:41.494Z] Copying: 903/1024 [MB] (16 MBps) [2024-11-26T23:13:42.439Z] Copying: 920/1024 [MB] (16 MBps) [2024-11-26T23:13:43.381Z] Copying: 936/1024 [MB] (16 MBps) [2024-11-26T23:13:44.324Z] Copying: 953/1024 [MB] (16 MBps) [2024-11-26T23:13:45.268Z] Copying: 963/1024 [MB] (10 MBps) [2024-11-26T23:13:46.652Z] Copying: 974/1024 [MB] (11 MBps) [2024-11-26T23:13:47.595Z] Copying: 987/1024 [MB] (12 MBps) [2024-11-26T23:13:48.162Z] Copying: 1002/1024 [MB] (15 MBps) [2024-11-26T23:13:48.427Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-26 23:13:48.309830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.300 [2024-11-26 23:13:48.310725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:09.300 [2024-11-26 23:13:48.310753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:09.300 [2024-11-26 23:13:48.310767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.300 [2024-11-26 23:13:48.310799] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:09.300 [2024-11-26 23:13:48.311793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.300 [2024-11-26 23:13:48.311833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:09.300 [2024-11-26 23:13:48.311847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.975 ms 00:29:09.300 [2024-11-26 23:13:48.311908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.300 [2024-11-26 23:13:48.312172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.300 [2024-11-26 23:13:48.312183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:09.300 [2024-11-26 23:13:48.312198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:29:09.300 [2024-11-26 23:13:48.312208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.300 [2024-11-26 23:13:48.316937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.300 [2024-11-26 23:13:48.316969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:09.300 [2024-11-26 23:13:48.316984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.712 ms 00:29:09.300 [2024-11-26 23:13:48.316996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.300 [2024-11-26 23:13:48.326537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.300 [2024-11-26 23:13:48.326574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:09.300 [2024-11-26 23:13:48.326588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.517 ms 00:29:09.300 [2024-11-26 23:13:48.326604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.300 [2024-11-26 23:13:48.329706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.300 [2024-11-26 23:13:48.329754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist NV cache metadata 00:29:09.300 [2024-11-26 23:13:48.329767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.033 ms 00:29:09.300 [2024-11-26 23:13:48.329776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.300 [2024-11-26 23:13:48.335628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.300 [2024-11-26 23:13:48.335676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:09.300 [2024-11-26 23:13:48.335687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.807 ms 00:29:09.300 [2024-11-26 23:13:48.335697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.300 [2024-11-26 23:13:48.340532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.300 [2024-11-26 23:13:48.340576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:09.300 [2024-11-26 23:13:48.340591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.786 ms 00:29:09.300 [2024-11-26 23:13:48.340613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.300 [2024-11-26 23:13:48.344143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.300 [2024-11-26 23:13:48.344207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:09.300 [2024-11-26 23:13:48.344219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.512 ms 00:29:09.300 [2024-11-26 23:13:48.344226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.300 [2024-11-26 23:13:48.347125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.300 [2024-11-26 23:13:48.347169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:09.300 [2024-11-26 23:13:48.347180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.856 ms 00:29:09.300 [2024-11-26 23:13:48.347189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.300 [2024-11-26 23:13:48.349751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.300 [2024-11-26 23:13:48.349792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:09.300 [2024-11-26 23:13:48.349802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.521 ms 00:29:09.300 [2024-11-26 23:13:48.349810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.300 [2024-11-26 23:13:48.352368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.300 [2024-11-26 23:13:48.352408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:09.300 [2024-11-26 23:13:48.352418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.485 ms 00:29:09.300 [2024-11-26 23:13:48.352428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.300 [2024-11-26 23:13:48.352468] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:09.300 [2024-11-26 23:13:48.352485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:09.300 [2024-11-26 23:13:48.352498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:09.300 [2024-11-26 23:13:48.352507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 
[2024-11-26 23:13:48.352517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: 
free 00:29:09.300 [2024-11-26 23:13:48.352717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:09.300 [2024-11-26 23:13:48.352848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.352855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.352862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.352871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.352879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.352887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.352895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.352903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 
261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.352910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.352918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.352926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.352933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.352941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.352948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.352958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.352966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.352974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.352983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.352992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:09.301 [2024-11-26 23:13:48.353320] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:09.301 [2024-11-26 23:13:48.353330] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 16fdd1ae-09cf-4996-8826-278dc9b2a035 00:29:09.301 [2024-11-26 23:13:48.353339] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
total valid LBAs: 262656 00:29:09.301 [2024-11-26 23:13:48.353531] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:09.301 [2024-11-26 23:13:48.353543] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:09.301 [2024-11-26 23:13:48.353553] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:09.301 [2024-11-26 23:13:48.353562] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:09.301 [2024-11-26 23:13:48.353571] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:09.301 [2024-11-26 23:13:48.353590] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:09.301 [2024-11-26 23:13:48.353598] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:09.301 [2024-11-26 23:13:48.353604] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:09.301 [2024-11-26 23:13:48.353612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.301 [2024-11-26 23:13:48.353634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:09.301 [2024-11-26 23:13:48.353644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.147 ms 00:29:09.301 [2024-11-26 23:13:48.353653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.301 [2024-11-26 23:13:48.356779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.301 [2024-11-26 23:13:48.356820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:09.301 [2024-11-26 23:13:48.356832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.103 ms 00:29:09.301 [2024-11-26 23:13:48.356849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.301 [2024-11-26 23:13:48.357000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.301 [2024-11-26 23:13:48.357010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:09.301 [2024-11-26 23:13:48.357019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:29:09.301 [2024-11-26 23:13:48.357027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.301 [2024-11-26 23:13:48.367108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:09.301 [2024-11-26 23:13:48.367154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:09.301 [2024-11-26 23:13:48.367172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:09.301 [2024-11-26 23:13:48.367200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.301 [2024-11-26 23:13:48.367261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:09.301 [2024-11-26 23:13:48.367271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:09.301 [2024-11-26 23:13:48.367280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:09.301 [2024-11-26 23:13:48.367356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.301 [2024-11-26 23:13:48.367423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:09.301 [2024-11-26 23:13:48.367435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:09.301 [2024-11-26 23:13:48.367444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:09.301 [2024-11-26 
23:13:48.367456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.301 [2024-11-26 23:13:48.367474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:09.301 [2024-11-26 23:13:48.367483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:09.301 [2024-11-26 23:13:48.367492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:09.301 [2024-11-26 23:13:48.367501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.301 [2024-11-26 23:13:48.386821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:09.301 [2024-11-26 23:13:48.386874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:09.301 [2024-11-26 23:13:48.386895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:09.301 [2024-11-26 23:13:48.386905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.301 [2024-11-26 23:13:48.402289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:09.301 [2024-11-26 23:13:48.402370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:09.301 [2024-11-26 23:13:48.402384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:09.302 [2024-11-26 23:13:48.402393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.302 [2024-11-26 23:13:48.402462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:09.302 [2024-11-26 23:13:48.402474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:09.302 [2024-11-26 23:13:48.402494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:09.302 [2024-11-26 23:13:48.402504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.302 [2024-11-26 23:13:48.402549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:09.302 [2024-11-26 23:13:48.402563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:09.302 [2024-11-26 23:13:48.402573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:09.302 [2024-11-26 23:13:48.402582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.302 [2024-11-26 23:13:48.402676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:09.302 [2024-11-26 23:13:48.402687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:09.302 [2024-11-26 23:13:48.402696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:09.302 [2024-11-26 23:13:48.402711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.302 [2024-11-26 23:13:48.402749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:09.302 [2024-11-26 23:13:48.402760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:09.302 [2024-11-26 23:13:48.402769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:09.302 [2024-11-26 23:13:48.402778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.302 [2024-11-26 23:13:48.402828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:09.302 [2024-11-26 23:13:48.402840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:09.302 [2024-11-26 23:13:48.402849] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:09.302 [2024-11-26 23:13:48.402862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.302 [2024-11-26 23:13:48.402921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:09.302 [2024-11-26 23:13:48.402944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:09.302 [2024-11-26 23:13:48.402954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:09.302 [2024-11-26 23:13:48.402963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.302 [2024-11-26 23:13:48.403132] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 93.261 ms, result 0 00:29:09.873 00:29:09.873 00:29:09.873 23:13:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:12.416 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:29:12.416 23:13:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:29:12.416 23:13:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:29:12.416 23:13:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:12.416 23:13:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:12.416 23:13:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:12.416 23:13:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:12.416 23:13:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:12.416 Process with pid 92923 is not found 00:29:12.416 23:13:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 92923 00:29:12.416 23:13:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 92923 ']' 00:29:12.416 23:13:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 92923 00:29:12.416 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (92923) - No such process 00:29:12.416 23:13:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 92923 is not found' 00:29:12.416 23:13:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:29:12.678 Remove shared memory files 00:29:12.678 23:13:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:29:12.678 23:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:12.678 23:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:12.678 23:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:12.678 23:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:29:12.678 23:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:12.678 23:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:12.678 ************************************ 00:29:12.678 END TEST ftl_dirty_shutdown 00:29:12.678 ************************************ 00:29:12.678 00:29:12.678 real 4m11.829s 00:29:12.678 user 4m40.335s 00:29:12.678 sys 0m28.716s 00:29:12.678 23:13:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:29:12.678 23:13:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:12.678 23:13:51 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:12.678 23:13:51 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:12.678 23:13:51 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:12.678 23:13:51 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:12.678 ************************************ 00:29:12.678 START TEST ftl_upgrade_shutdown 00:29:12.678 ************************************ 00:29:12.678 23:13:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:12.678 * Looking for test storage... 00:29:12.678 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:12.678 23:13:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:12.678 23:13:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:29:12.678 23:13:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:12.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:12.940 --rc genhtml_branch_coverage=1 00:29:12.940 --rc genhtml_function_coverage=1 00:29:12.940 --rc genhtml_legend=1 00:29:12.940 --rc geninfo_all_blocks=1 00:29:12.940 --rc geninfo_unexecuted_blocks=1 00:29:12.940 00:29:12.940 ' 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:12.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:12.940 --rc genhtml_branch_coverage=1 00:29:12.940 --rc genhtml_function_coverage=1 00:29:12.940 --rc genhtml_legend=1 00:29:12.940 --rc geninfo_all_blocks=1 00:29:12.940 --rc geninfo_unexecuted_blocks=1 00:29:12.940 00:29:12.940 ' 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:12.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:12.940 --rc genhtml_branch_coverage=1 00:29:12.940 --rc genhtml_function_coverage=1 00:29:12.940 --rc genhtml_legend=1 00:29:12.940 --rc geninfo_all_blocks=1 00:29:12.940 --rc geninfo_unexecuted_blocks=1 00:29:12.940 00:29:12.940 ' 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:12.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:12.940 --rc genhtml_branch_coverage=1 00:29:12.940 --rc genhtml_function_coverage=1 00:29:12.940 --rc genhtml_legend=1 00:29:12.940 --rc geninfo_all_blocks=1 00:29:12.940 --rc geninfo_unexecuted_blocks=1 00:29:12.940 00:29:12.940 ' 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:12.940 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:29:12.941 23:13:51 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=95689 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 95689 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 95689 ']' 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:12.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:12.941 23:13:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:12.941 [2024-11-26 23:13:51.946190] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:29:12.941 [2024-11-26 23:13:51.946372] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95689 ] 00:29:13.201 [2024-11-26 23:13:52.089160] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
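Note on the target bring-up traced above: tcp_target_setup launches spdk_tgt pinned to core 0 via --cpumask, then blocks in waitforlisten until the RPC socket answers, and only then sends any bdev/nvmf configuration. A minimal sketch of that pattern, reusing the spdk_tgt_bin and rootdir variables exported by ftl/common.sh; the polling call and retry budget here are illustrative assumptions, not the harness verbatim:

    "$spdk_tgt_bin" --cpumask='[0]' &
    spdk_tgt_pid=$!
    # Poll the default RPC socket until the target answers (assumed mechanism).
    for ((i = 0; i < 100; i++)); do
        "$rootdir/scripts/rpc.py" -t 1 rpc_get_methods &> /dev/null && break
        sleep 0.5
    done

Once the loop exits, /var/tmp/spdk.sock is live and the rpc.py calls that follow in the trace (bdev_nvme_attach_controller, lvstore setup, bdev_ftl_create) can proceed.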
00:29:13.202 [2024-11-26 23:13:52.117387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.202 [2024-11-26 23:13:52.146067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.776 23:13:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:13.776 23:13:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:13.776 23:13:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:13.776 23:13:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:29:13.776 23:13:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:29:13.776 23:13:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:13.776 23:13:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:29:13.776 23:13:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:13.776 23:13:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:29:13.776 23:13:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:13.776 23:13:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:29:13.776 23:13:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:13.776 23:13:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:29:13.776 23:13:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:13.776 23:13:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:29:13.776 23:13:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:13.776 23:13:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:29:13.776 23:13:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:29:13.776 23:13:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:29:13.776 23:13:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:13.776 23:13:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:29:13.776 23:13:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:13.776 23:13:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:29:14.051 23:13:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:29:14.051 23:13:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:14.051 23:13:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:29:14.051 23:13:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:29:14.051 23:13:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:14.051 23:13:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:14.051 23:13:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:14.051 23:13:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:29:14.315 23:13:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:14.315 { 00:29:14.315 "name": 
"basen1", 00:29:14.315 "aliases": [ 00:29:14.315 "37865994-1111-4518-80de-3493122ab3cb" 00:29:14.315 ], 00:29:14.315 "product_name": "NVMe disk", 00:29:14.315 "block_size": 4096, 00:29:14.315 "num_blocks": 1310720, 00:29:14.315 "uuid": "37865994-1111-4518-80de-3493122ab3cb", 00:29:14.315 "numa_id": -1, 00:29:14.315 "assigned_rate_limits": { 00:29:14.315 "rw_ios_per_sec": 0, 00:29:14.315 "rw_mbytes_per_sec": 0, 00:29:14.315 "r_mbytes_per_sec": 0, 00:29:14.315 "w_mbytes_per_sec": 0 00:29:14.315 }, 00:29:14.315 "claimed": true, 00:29:14.315 "claim_type": "read_many_write_one", 00:29:14.315 "zoned": false, 00:29:14.315 "supported_io_types": { 00:29:14.315 "read": true, 00:29:14.315 "write": true, 00:29:14.315 "unmap": true, 00:29:14.315 "flush": true, 00:29:14.315 "reset": true, 00:29:14.315 "nvme_admin": true, 00:29:14.315 "nvme_io": true, 00:29:14.315 "nvme_io_md": false, 00:29:14.315 "write_zeroes": true, 00:29:14.315 "zcopy": false, 00:29:14.315 "get_zone_info": false, 00:29:14.315 "zone_management": false, 00:29:14.315 "zone_append": false, 00:29:14.315 "compare": true, 00:29:14.315 "compare_and_write": false, 00:29:14.315 "abort": true, 00:29:14.315 "seek_hole": false, 00:29:14.315 "seek_data": false, 00:29:14.315 "copy": true, 00:29:14.315 "nvme_iov_md": false 00:29:14.315 }, 00:29:14.315 "driver_specific": { 00:29:14.315 "nvme": [ 00:29:14.315 { 00:29:14.315 "pci_address": "0000:00:11.0", 00:29:14.315 "trid": { 00:29:14.315 "trtype": "PCIe", 00:29:14.315 "traddr": "0000:00:11.0" 00:29:14.315 }, 00:29:14.315 "ctrlr_data": { 00:29:14.315 "cntlid": 0, 00:29:14.315 "vendor_id": "0x1b36", 00:29:14.315 "model_number": "QEMU NVMe Ctrl", 00:29:14.315 "serial_number": "12341", 00:29:14.315 "firmware_revision": "8.0.0", 00:29:14.315 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:14.315 "oacs": { 00:29:14.315 "security": 0, 00:29:14.315 "format": 1, 00:29:14.315 "firmware": 0, 00:29:14.315 "ns_manage": 1 00:29:14.315 }, 00:29:14.315 "multi_ctrlr": false, 00:29:14.315 "ana_reporting": false 00:29:14.315 }, 00:29:14.315 "vs": { 00:29:14.315 "nvme_version": "1.4" 00:29:14.315 }, 00:29:14.315 "ns_data": { 00:29:14.315 "id": 1, 00:29:14.315 "can_share": false 00:29:14.315 } 00:29:14.315 } 00:29:14.315 ], 00:29:14.315 "mp_policy": "active_passive" 00:29:14.315 } 00:29:14.315 } 00:29:14.315 ]' 00:29:14.315 23:13:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:14.315 23:13:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:14.315 23:13:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:14.315 23:13:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:14.315 23:13:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:14.315 23:13:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:29:14.315 23:13:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:14.315 23:13:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:29:14.315 23:13:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:14.315 23:13:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:14.315 23:13:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:14.576 23:13:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=e45abb66-8511-4278-9a4a-0ceb4ba2f7a7 00:29:14.576 23:13:53 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:14.576 23:13:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e45abb66-8511-4278-9a4a-0ceb4ba2f7a7 00:29:14.837 23:13:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:29:15.097 23:13:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=c5638d42-c8f0-478a-8a50-6e040843948c 00:29:15.097 23:13:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u c5638d42-c8f0-478a-8a50-6e040843948c 00:29:15.355 23:13:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=269953a8-d226-4d86-baac-8e5498b7ddbc 00:29:15.355 23:13:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 269953a8-d226-4d86-baac-8e5498b7ddbc ]] 00:29:15.355 23:13:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 269953a8-d226-4d86-baac-8e5498b7ddbc 5120 00:29:15.355 23:13:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:29:15.355 23:13:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:15.355 23:13:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=269953a8-d226-4d86-baac-8e5498b7ddbc 00:29:15.355 23:13:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:29:15.355 23:13:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 269953a8-d226-4d86-baac-8e5498b7ddbc 00:29:15.355 23:13:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=269953a8-d226-4d86-baac-8e5498b7ddbc 00:29:15.355 23:13:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:15.355 23:13:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:15.355 23:13:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:15.355 23:13:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 269953a8-d226-4d86-baac-8e5498b7ddbc 00:29:15.613 23:13:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:15.613 { 00:29:15.613 "name": "269953a8-d226-4d86-baac-8e5498b7ddbc", 00:29:15.613 "aliases": [ 00:29:15.613 "lvs/basen1p0" 00:29:15.613 ], 00:29:15.613 "product_name": "Logical Volume", 00:29:15.613 "block_size": 4096, 00:29:15.613 "num_blocks": 5242880, 00:29:15.613 "uuid": "269953a8-d226-4d86-baac-8e5498b7ddbc", 00:29:15.613 "assigned_rate_limits": { 00:29:15.613 "rw_ios_per_sec": 0, 00:29:15.613 "rw_mbytes_per_sec": 0, 00:29:15.613 "r_mbytes_per_sec": 0, 00:29:15.613 "w_mbytes_per_sec": 0 00:29:15.613 }, 00:29:15.613 "claimed": false, 00:29:15.613 "zoned": false, 00:29:15.613 "supported_io_types": { 00:29:15.613 "read": true, 00:29:15.613 "write": true, 00:29:15.613 "unmap": true, 00:29:15.613 "flush": false, 00:29:15.613 "reset": true, 00:29:15.613 "nvme_admin": false, 00:29:15.613 "nvme_io": false, 00:29:15.613 "nvme_io_md": false, 00:29:15.613 "write_zeroes": true, 00:29:15.613 "zcopy": false, 00:29:15.613 "get_zone_info": false, 00:29:15.613 "zone_management": false, 00:29:15.613 "zone_append": false, 00:29:15.613 "compare": false, 00:29:15.613 "compare_and_write": false, 00:29:15.613 "abort": false, 00:29:15.613 "seek_hole": true, 00:29:15.613 "seek_data": true, 
00:29:15.613 "copy": false, 00:29:15.613 "nvme_iov_md": false 00:29:15.613 }, 00:29:15.613 "driver_specific": { 00:29:15.613 "lvol": { 00:29:15.613 "lvol_store_uuid": "c5638d42-c8f0-478a-8a50-6e040843948c", 00:29:15.613 "base_bdev": "basen1", 00:29:15.613 "thin_provision": true, 00:29:15.613 "num_allocated_clusters": 0, 00:29:15.613 "snapshot": false, 00:29:15.613 "clone": false, 00:29:15.613 "esnap_clone": false 00:29:15.613 } 00:29:15.613 } 00:29:15.613 } 00:29:15.613 ]' 00:29:15.613 23:13:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:15.613 23:13:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:15.613 23:13:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:15.613 23:13:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:29:15.613 23:13:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:29:15.613 23:13:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:29:15.613 23:13:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:29:15.613 23:13:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:15.613 23:13:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:29:15.870 23:13:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:29:15.870 23:13:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:29:15.870 23:13:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:29:16.129 23:13:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:29:16.129 23:13:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:29:16.129 23:13:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 269953a8-d226-4d86-baac-8e5498b7ddbc -c cachen1p0 --l2p_dram_limit 2 00:29:16.129 [2024-11-26 23:13:55.214614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.129 [2024-11-26 23:13:55.214656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:16.129 [2024-11-26 23:13:55.214670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:16.129 [2024-11-26 23:13:55.214677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.129 [2024-11-26 23:13:55.214731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.129 [2024-11-26 23:13:55.214740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:16.129 [2024-11-26 23:13:55.214750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:29:16.129 [2024-11-26 23:13:55.214756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.129 [2024-11-26 23:13:55.214773] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:16.129 [2024-11-26 23:13:55.214983] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:16.129 [2024-11-26 23:13:55.214997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.129 [2024-11-26 23:13:55.215003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 
name: Open cache bdev 00:29:16.129 [2024-11-26 23:13:55.215012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.228 ms 00:29:16.129 [2024-11-26 23:13:55.215018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.129 [2024-11-26 23:13:55.215048] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID b8d864d2-5103-4c9b-aef1-1f6d395a70b5 00:29:16.129 [2024-11-26 23:13:55.216373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.129 [2024-11-26 23:13:55.216494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:29:16.129 [2024-11-26 23:13:55.216509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:29:16.129 [2024-11-26 23:13:55.216521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.129 [2024-11-26 23:13:55.223590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.129 [2024-11-26 23:13:55.223704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:16.129 [2024-11-26 23:13:55.223717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.991 ms 00:29:16.129 [2024-11-26 23:13:55.223732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.129 [2024-11-26 23:13:55.223769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.129 [2024-11-26 23:13:55.223778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:16.129 [2024-11-26 23:13:55.223784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:16.129 [2024-11-26 23:13:55.223792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.129 [2024-11-26 23:13:55.223836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.129 [2024-11-26 23:13:55.223849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:16.129 [2024-11-26 23:13:55.223856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:16.129 [2024-11-26 23:13:55.223864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.129 [2024-11-26 23:13:55.223883] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:16.129 [2024-11-26 23:13:55.225565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.129 [2024-11-26 23:13:55.225590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:16.129 [2024-11-26 23:13:55.225600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.685 ms 00:29:16.129 [2024-11-26 23:13:55.225606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.129 [2024-11-26 23:13:55.225632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.129 [2024-11-26 23:13:55.225638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:16.129 [2024-11-26 23:13:55.225648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:16.129 [2024-11-26 23:13:55.225655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.129 [2024-11-26 23:13:55.225670] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:29:16.129 [2024-11-26 23:13:55.225784] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:16.129 [2024-11-26 
23:13:55.225796] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:16.129 [2024-11-26 23:13:55.225805] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:16.129 [2024-11-26 23:13:55.225819] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:16.129 [2024-11-26 23:13:55.225829] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:16.129 [2024-11-26 23:13:55.225841] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:16.129 [2024-11-26 23:13:55.225847] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:16.129 [2024-11-26 23:13:55.225861] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:16.129 [2024-11-26 23:13:55.225866] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:16.129 [2024-11-26 23:13:55.225874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.129 [2024-11-26 23:13:55.225880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:16.129 [2024-11-26 23:13:55.225888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.206 ms 00:29:16.129 [2024-11-26 23:13:55.225894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.129 [2024-11-26 23:13:55.225966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.129 [2024-11-26 23:13:55.225973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:16.129 [2024-11-26 23:13:55.225984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:29:16.129 [2024-11-26 23:13:55.225990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.129 [2024-11-26 23:13:55.226066] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:16.129 [2024-11-26 23:13:55.226077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:16.129 [2024-11-26 23:13:55.226086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:16.129 [2024-11-26 23:13:55.226095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:16.129 [2024-11-26 23:13:55.226103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:16.129 [2024-11-26 23:13:55.226108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:16.129 [2024-11-26 23:13:55.226114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:16.129 [2024-11-26 23:13:55.226120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:16.129 [2024-11-26 23:13:55.226127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:16.129 [2024-11-26 23:13:55.226134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:16.129 [2024-11-26 23:13:55.226141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:16.129 [2024-11-26 23:13:55.226147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:16.129 [2024-11-26 23:13:55.226155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:16.129 [2024-11-26 23:13:55.226161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:16.129 [2024-11-26 23:13:55.226169] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:16.129 [2024-11-26 23:13:55.226174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:16.129 [2024-11-26 23:13:55.226181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:16.129 [2024-11-26 23:13:55.226185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:16.129 [2024-11-26 23:13:55.226192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:16.129 [2024-11-26 23:13:55.226201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:16.129 [2024-11-26 23:13:55.226208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:16.129 [2024-11-26 23:13:55.226214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:16.129 [2024-11-26 23:13:55.226221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:16.129 [2024-11-26 23:13:55.226227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:16.129 [2024-11-26 23:13:55.226234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:16.129 [2024-11-26 23:13:55.226241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:16.130 [2024-11-26 23:13:55.226248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:16.130 [2024-11-26 23:13:55.226254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:16.130 [2024-11-26 23:13:55.226263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:16.130 [2024-11-26 23:13:55.226269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:16.130 [2024-11-26 23:13:55.226276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:16.130 [2024-11-26 23:13:55.226282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:16.130 [2024-11-26 23:13:55.226289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:16.130 [2024-11-26 23:13:55.226310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:16.130 [2024-11-26 23:13:55.226319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:16.130 [2024-11-26 23:13:55.226325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:16.130 [2024-11-26 23:13:55.226332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:16.130 [2024-11-26 23:13:55.226338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:16.130 [2024-11-26 23:13:55.226346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:16.130 [2024-11-26 23:13:55.226351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:16.130 [2024-11-26 23:13:55.226359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:16.130 [2024-11-26 23:13:55.226366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:16.130 [2024-11-26 23:13:55.226373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:16.130 [2024-11-26 23:13:55.226379] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:16.130 [2024-11-26 23:13:55.226389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:16.130 [2024-11-26 23:13:55.226397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:16.130 [2024-11-26 23:13:55.226406] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:16.130 [2024-11-26 23:13:55.226415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:16.130 [2024-11-26 23:13:55.226422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:16.130 [2024-11-26 23:13:55.226428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:16.130 [2024-11-26 23:13:55.226436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:16.130 [2024-11-26 23:13:55.226444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:16.130 [2024-11-26 23:13:55.226452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:16.130 [2024-11-26 23:13:55.226462] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:16.130 [2024-11-26 23:13:55.226474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:16.130 [2024-11-26 23:13:55.226482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:16.130 [2024-11-26 23:13:55.226491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:16.130 [2024-11-26 23:13:55.226498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:16.130 [2024-11-26 23:13:55.226506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:16.130 [2024-11-26 23:13:55.226512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:16.130 [2024-11-26 23:13:55.226521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:16.130 [2024-11-26 23:13:55.226528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:16.130 [2024-11-26 23:13:55.226535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:16.130 [2024-11-26 23:13:55.226541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:16.130 [2024-11-26 23:13:55.226549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:16.130 [2024-11-26 23:13:55.226556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:16.130 [2024-11-26 23:13:55.226563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:16.130 [2024-11-26 23:13:55.226569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:16.130 [2024-11-26 23:13:55.226577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:16.130 [2024-11-26 23:13:55.226584] upgrade/ftl_sb_v5.c: 
422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:16.130 [2024-11-26 23:13:55.226592] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:16.130 [2024-11-26 23:13:55.226599] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:16.130 [2024-11-26 23:13:55.226606] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:16.130 [2024-11-26 23:13:55.226612] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:16.130 [2024-11-26 23:13:55.226619] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:16.130 [2024-11-26 23:13:55.226625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.130 [2024-11-26 23:13:55.226634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:16.130 [2024-11-26 23:13:55.226641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.613 ms 00:29:16.130 [2024-11-26 23:13:55.226648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.130 [2024-11-26 23:13:55.226679] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:29:16.130 [2024-11-26 23:13:55.226689] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:20.330 [2024-11-26 23:13:59.291656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.330 [2024-11-26 23:13:59.291971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:20.330 [2024-11-26 23:13:59.292000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4064.960 ms 00:29:20.330 [2024-11-26 23:13:59.292014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.330 [2024-11-26 23:13:59.310800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.330 [2024-11-26 23:13:59.311032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:20.330 [2024-11-26 23:13:59.311055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.649 ms 00:29:20.330 [2024-11-26 23:13:59.311077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.330 [2024-11-26 23:13:59.311176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.330 [2024-11-26 23:13:59.311192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:20.330 [2024-11-26 23:13:59.311203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:29:20.330 [2024-11-26 23:13:59.311216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.330 [2024-11-26 23:13:59.328549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.330 [2024-11-26 23:13:59.328609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:20.330 [2024-11-26 23:13:59.328626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.296 ms 00:29:20.330 [2024-11-26 23:13:59.328638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.330 [2024-11-26 23:13:59.328677] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.330 [2024-11-26 23:13:59.328691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:20.330 [2024-11-26 23:13:59.328701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:20.330 [2024-11-26 23:13:59.328712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.330 [2024-11-26 23:13:59.329517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.330 [2024-11-26 23:13:59.329757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:20.330 [2024-11-26 23:13:59.329772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.747 ms 00:29:20.330 [2024-11-26 23:13:59.329792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.330 [2024-11-26 23:13:59.329845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.330 [2024-11-26 23:13:59.329857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:20.330 [2024-11-26 23:13:59.329867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:29:20.330 [2024-11-26 23:13:59.329880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.330 [2024-11-26 23:13:59.341502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.330 [2024-11-26 23:13:59.341553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:20.330 [2024-11-26 23:13:59.341567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.599 ms 00:29:20.330 [2024-11-26 23:13:59.341581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.330 [2024-11-26 23:13:59.366519] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:20.330 [2024-11-26 23:13:59.368388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.330 [2024-11-26 23:13:59.368432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:20.330 [2024-11-26 23:13:59.368449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.707 ms 00:29:20.330 [2024-11-26 23:13:59.368460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.330 [2024-11-26 23:13:59.388379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.330 [2024-11-26 23:13:59.388434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:29:20.330 [2024-11-26 23:13:59.388454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.857 ms 00:29:20.330 [2024-11-26 23:13:59.388463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.330 [2024-11-26 23:13:59.388581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.330 [2024-11-26 23:13:59.388593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:20.330 [2024-11-26 23:13:59.388605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:29:20.330 [2024-11-26 23:13:59.388615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.330 [2024-11-26 23:13:59.394150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.330 [2024-11-26 23:13:59.394370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:29:20.331 [2024-11-26 23:13:59.394395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 
duration: 5.504 ms 00:29:20.331 [2024-11-26 23:13:59.394405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.331 [2024-11-26 23:13:59.399367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.331 [2024-11-26 23:13:59.399413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:29:20.331 [2024-11-26 23:13:59.399427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.912 ms 00:29:20.331 [2024-11-26 23:13:59.399435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.331 [2024-11-26 23:13:59.399800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.331 [2024-11-26 23:13:59.399813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:20.331 [2024-11-26 23:13:59.399828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.317 ms 00:29:20.331 [2024-11-26 23:13:59.399836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.331 [2024-11-26 23:13:59.444278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.331 [2024-11-26 23:13:59.444350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:29:20.331 [2024-11-26 23:13:59.444367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.381 ms 00:29:20.331 [2024-11-26 23:13:59.444381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.331 [2024-11-26 23:13:59.452167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.331 [2024-11-26 23:13:59.452217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:29:20.331 [2024-11-26 23:13:59.452233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.701 ms 00:29:20.331 [2024-11-26 23:13:59.452243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.592 [2024-11-26 23:13:59.458353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.592 [2024-11-26 23:13:59.458397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:29:20.592 [2024-11-26 23:13:59.458412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.055 ms 00:29:20.592 [2024-11-26 23:13:59.458420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.592 [2024-11-26 23:13:59.464791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.592 [2024-11-26 23:13:59.464843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:20.592 [2024-11-26 23:13:59.464862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.318 ms 00:29:20.592 [2024-11-26 23:13:59.464870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.592 [2024-11-26 23:13:59.464927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.592 [2024-11-26 23:13:59.464937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:20.592 [2024-11-26 23:13:59.464950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:29:20.592 [2024-11-26 23:13:59.464959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.592 [2024-11-26 23:13:59.465051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.592 [2024-11-26 23:13:59.465063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:20.592 [2024-11-26 23:13:59.465079] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:29:20.592 [2024-11-26 23:13:59.465088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.592 [2024-11-26 23:13:59.466888] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4251.667 ms, result 0 00:29:20.592 { 00:29:20.592 "name": "ftl", 00:29:20.592 "uuid": "b8d864d2-5103-4c9b-aef1-1f6d395a70b5" 00:29:20.593 } 00:29:20.593 23:13:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:29:20.593 [2024-11-26 23:13:59.691855] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:20.593 23:13:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:29:20.854 23:13:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:29:21.115 [2024-11-26 23:14:00.108389] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:21.115 23:14:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:29:21.377 [2024-11-26 23:14:00.328861] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:21.377 23:14:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:21.638 Fill FTL, iteration 1 00:29:21.639 23:14:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:29:21.639 23:14:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:29:21.639 23:14:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:29:21.639 23:14:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:29:21.639 23:14:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:29:21.639 23:14:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:29:21.639 23:14:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:29:21.639 23:14:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:29:21.639 23:14:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:29:21.639 23:14:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:21.639 23:14:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:29:21.639 23:14:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:21.639 23:14:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:21.639 23:14:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:21.639 23:14:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:21.639 23:14:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:29:21.639 23:14:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=95816 00:29:21.639 23:14:00 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@164 -- # export spdk_ini_pid 00:29:21.639 23:14:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 95816 /var/tmp/spdk.tgt.sock 00:29:21.639 23:14:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:29:21.639 23:14:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 95816 ']' 00:29:21.639 23:14:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:29:21.639 23:14:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:21.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:29:21.639 23:14:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:29:21.639 23:14:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:21.639 23:14:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:21.900 [2024-11-26 23:14:00.800414] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:29:21.900 [2024-11-26 23:14:00.800847] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95816 ] 00:29:21.901 [2024-11-26 23:14:00.940250] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:21.901 [2024-11-26 23:14:00.968079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.901 [2024-11-26 23:14:00.998785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.849 23:14:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:22.849 23:14:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:22.849 23:14:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:29:22.849 ftln1 00:29:22.849 23:14:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:29:22.849 23:14:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:29:23.110 23:14:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:29:23.110 23:14:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 95816 00:29:23.110 23:14:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 95816 ']' 00:29:23.110 23:14:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 95816 00:29:23.110 23:14:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:23.110 23:14:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:23.110 23:14:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95816 00:29:23.110 killing process with pid 95816 00:29:23.110 23:14:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:23.110 
23:14:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:23.110 23:14:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95816' 00:29:23.110 23:14:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 95816 00:29:23.110 23:14:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 95816 00:29:23.381 23:14:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:29:23.382 23:14:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:23.687 [2024-11-26 23:14:02.554715] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:29:23.687 [2024-11-26 23:14:02.554867] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95847 ] 00:29:23.687 [2024-11-26 23:14:02.691422] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:23.687 [2024-11-26 23:14:02.717851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.687 [2024-11-26 23:14:02.746615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.066  [2024-11-26T23:14:05.135Z] Copying: 182/1024 [MB] (182 MBps) [2024-11-26T23:14:06.075Z] Copying: 414/1024 [MB] (232 MBps) [2024-11-26T23:14:07.013Z] Copying: 647/1024 [MB] (233 MBps) [2024-11-26T23:14:07.586Z] Copying: 890/1024 [MB] (243 MBps) [2024-11-26T23:14:07.847Z] Copying: 1024/1024 [MB] (average 222 MBps) 00:29:28.720 00:29:28.720 Calculate MD5 checksum, iteration 1 00:29:28.720 23:14:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:29:28.720 23:14:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:29:28.720 23:14:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:28.720 23:14:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:28.720 23:14:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:28.720 23:14:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:28.720 23:14:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:28.720 23:14:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:28.720 [2024-11-26 23:14:07.781816] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
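The spdk_dd run starting above, like every later one, is driven by --json=.../ini.json rather than a live RPC connection: the harness attached the exported subsystem on the initiator target as controller "ftl" (yielding the namespace bdev ftln1), dumped only the bdev subsystem configuration, wrapped it in a subsystems array, and then killed the initiator target. Condensed from the ftl/common.sh steps visible in the trace, with sockets and paths as exported earlier:

    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller \
        -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
    {
        echo '{"subsystems": ['
        "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
        echo ']}'
    } > "$spdk_ini_cnfg"    # replayed by each spdk_dd --json invocation

With that file in place, --ob=ftln1 in the dd commands resolves to the FTL device over NVMe/TCP at 127.0.0.1:4420.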
00:29:28.720 [2024-11-26 23:14:07.781945] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95905 ] 00:29:28.981 [2024-11-26 23:14:07.916043] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:28.981 [2024-11-26 23:14:07.943451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.981 [2024-11-26 23:14:07.964462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.366  [2024-11-26T23:14:10.065Z] Copying: 617/1024 [MB] (617 MBps) [2024-11-26T23:14:10.065Z] Copying: 1024/1024 [MB] (average 625 MBps) 00:29:30.938 00:29:30.938 23:14:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:29:30.938 23:14:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:32.849 23:14:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:32.849 Fill FTL, iteration 2 00:29:32.849 23:14:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=ccceb722ec061e90045368a68ef6b1a0 00:29:32.849 23:14:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:32.849 23:14:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:32.849 23:14:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:29:32.849 23:14:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:32.849 23:14:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:32.849 23:14:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:32.849 23:14:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:32.849 23:14:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:32.849 23:14:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:32.849 [2024-11-26 23:14:11.909239] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:29:32.849 [2024-11-26 23:14:11.909949] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95950 ] 00:29:33.107 [2024-11-26 23:14:12.041172] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:29:33.107 [2024-11-26 23:14:12.067241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.107 [2024-11-26 23:14:12.083362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.488  [2024-11-26T23:14:14.556Z] Copying: 244/1024 [MB] (244 MBps) [2024-11-26T23:14:15.498Z] Copying: 486/1024 [MB] (242 MBps) [2024-11-26T23:14:16.441Z] Copying: 723/1024 [MB] (237 MBps) [2024-11-26T23:14:16.702Z] Copying: 965/1024 [MB] (242 MBps) [2024-11-26T23:14:16.703Z] Copying: 1024/1024 [MB] (average 240 MBps) 00:29:37.576 00:29:37.576 Calculate MD5 checksum, iteration 2 00:29:37.576 23:14:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:29:37.576 23:14:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:29:37.576 23:14:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:37.576 23:14:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:37.576 23:14:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:37.576 23:14:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:37.576 23:14:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:37.576 23:14:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:37.835 [2024-11-26 23:14:16.732725] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:29:37.835 [2024-11-26 23:14:16.732843] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96003 ] 00:29:37.835 [2024-11-26 23:14:16.865401] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
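Fill iterations 1 and 2 above are two passes of one loop: write 1024 MiB of random data at a growing --seek offset, read the same window back with --skip, and stash its MD5 in sums[] so the data can be re-verified after the shutdown/upgrade restart. A sketch of that loop using the names visible in the xtrace (seek, skip, sums, i, iterations); $testfile stands for test/ftl/file, and this is a reconstruction rather than the verbatim upgrade_shutdown.sh:

    seek=0
    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Fill FTL, iteration $((i + 1))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
        ((seek += 1024))
        echo "Calculate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of=$testfile --bs=1048576 --count=1024 --qd=2 --skip=$skip
        ((skip += 1024))
        # keep only the hash column of md5sum's output
        sums[i]=$(md5sum $testfile | cut -f1 '-d ')
    done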
00:29:37.835 [2024-11-26 23:14:16.891267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.835 [2024-11-26 23:14:16.908382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:39.215  [2024-11-26T23:14:18.913Z] Copying: 644/1024 [MB] (644 MBps) [2024-11-26T23:14:19.483Z] Copying: 1024/1024 [MB] (average 629 MBps) 00:29:40.356 00:29:40.356 23:14:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:29:40.356 23:14:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:42.268 23:14:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:42.268 23:14:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=f3d75f62294b7a70410df82fa4be1118 00:29:42.268 23:14:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:42.268 23:14:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:42.268 23:14:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:42.268 [2024-11-26 23:14:21.307439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.268 [2024-11-26 23:14:21.307482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:42.268 [2024-11-26 23:14:21.307495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:42.268 [2024-11-26 23:14:21.307509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.268 [2024-11-26 23:14:21.307529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.268 [2024-11-26 23:14:21.307537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:42.268 [2024-11-26 23:14:21.307543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:42.268 [2024-11-26 23:14:21.307550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.268 [2024-11-26 23:14:21.307566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.268 [2024-11-26 23:14:21.307573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:42.268 [2024-11-26 23:14:21.307582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:42.268 [2024-11-26 23:14:21.307589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.268 [2024-11-26 23:14:21.307646] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.195 ms, result 0 00:29:42.268 true 00:29:42.268 23:14:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:42.527 { 00:29:42.527 "name": "ftl", 00:29:42.527 "properties": [ 00:29:42.527 { 00:29:42.527 "name": "superblock_version", 00:29:42.527 "value": 5, 00:29:42.527 "read-only": true 00:29:42.527 }, 00:29:42.527 { 00:29:42.527 "name": "base_device", 00:29:42.527 "bands": [ 00:29:42.527 { 00:29:42.527 "id": 0, 00:29:42.527 "state": "FREE", 00:29:42.527 "validity": 0.0 00:29:42.527 }, 00:29:42.527 { 00:29:42.527 "id": 1, 00:29:42.527 "state": "FREE", 00:29:42.527 "validity": 0.0 00:29:42.527 }, 00:29:42.527 { 00:29:42.527 "id": 2, 00:29:42.527 "state": "FREE", 00:29:42.527 "validity": 0.0 00:29:42.527 }, 00:29:42.527 { 00:29:42.527 "id": 3, 00:29:42.527 "state": "FREE", 
00:29:42.527 "validity": 0.0 00:29:42.527 }, 00:29:42.527 { 00:29:42.527 "id": 4, 00:29:42.527 "state": "FREE", 00:29:42.527 "validity": 0.0 00:29:42.527 }, 00:29:42.527 { 00:29:42.527 "id": 5, 00:29:42.527 "state": "FREE", 00:29:42.527 "validity": 0.0 00:29:42.527 }, 00:29:42.527 { 00:29:42.527 "id": 6, 00:29:42.527 "state": "FREE", 00:29:42.527 "validity": 0.0 00:29:42.527 }, 00:29:42.527 { 00:29:42.527 "id": 7, 00:29:42.527 "state": "FREE", 00:29:42.527 "validity": 0.0 00:29:42.527 }, 00:29:42.527 { 00:29:42.527 "id": 8, 00:29:42.527 "state": "FREE", 00:29:42.527 "validity": 0.0 00:29:42.527 }, 00:29:42.527 { 00:29:42.527 "id": 9, 00:29:42.527 "state": "FREE", 00:29:42.527 "validity": 0.0 00:29:42.527 }, 00:29:42.527 { 00:29:42.527 "id": 10, 00:29:42.527 "state": "FREE", 00:29:42.527 "validity": 0.0 00:29:42.527 }, 00:29:42.527 { 00:29:42.527 "id": 11, 00:29:42.527 "state": "FREE", 00:29:42.527 "validity": 0.0 00:29:42.527 }, 00:29:42.527 { 00:29:42.527 "id": 12, 00:29:42.527 "state": "FREE", 00:29:42.527 "validity": 0.0 00:29:42.527 }, 00:29:42.527 { 00:29:42.527 "id": 13, 00:29:42.527 "state": "FREE", 00:29:42.527 "validity": 0.0 00:29:42.527 }, 00:29:42.527 { 00:29:42.527 "id": 14, 00:29:42.527 "state": "FREE", 00:29:42.527 "validity": 0.0 00:29:42.527 }, 00:29:42.527 { 00:29:42.527 "id": 15, 00:29:42.527 "state": "FREE", 00:29:42.527 "validity": 0.0 00:29:42.527 }, 00:29:42.527 { 00:29:42.527 "id": 16, 00:29:42.527 "state": "FREE", 00:29:42.527 "validity": 0.0 00:29:42.527 }, 00:29:42.527 { 00:29:42.527 "id": 17, 00:29:42.527 "state": "FREE", 00:29:42.527 "validity": 0.0 00:29:42.527 } 00:29:42.527 ], 00:29:42.527 "read-only": true 00:29:42.527 }, 00:29:42.527 { 00:29:42.527 "name": "cache_device", 00:29:42.527 "type": "bdev", 00:29:42.527 "chunks": [ 00:29:42.527 { 00:29:42.527 "id": 0, 00:29:42.527 "state": "INACTIVE", 00:29:42.527 "utilization": 0.0 00:29:42.527 }, 00:29:42.527 { 00:29:42.527 "id": 1, 00:29:42.527 "state": "CLOSED", 00:29:42.527 "utilization": 1.0 00:29:42.527 }, 00:29:42.527 { 00:29:42.527 "id": 2, 00:29:42.527 "state": "CLOSED", 00:29:42.527 "utilization": 1.0 00:29:42.527 }, 00:29:42.527 { 00:29:42.527 "id": 3, 00:29:42.527 "state": "OPEN", 00:29:42.527 "utilization": 0.001953125 00:29:42.527 }, 00:29:42.527 { 00:29:42.527 "id": 4, 00:29:42.527 "state": "OPEN", 00:29:42.527 "utilization": 0.0 00:29:42.527 } 00:29:42.527 ], 00:29:42.527 "read-only": true 00:29:42.527 }, 00:29:42.527 { 00:29:42.527 "name": "verbose_mode", 00:29:42.527 "value": true, 00:29:42.527 "unit": "", 00:29:42.527 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:42.527 }, 00:29:42.527 { 00:29:42.527 "name": "prep_upgrade_on_shutdown", 00:29:42.527 "value": false, 00:29:42.527 "unit": "", 00:29:42.527 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:42.527 } 00:29:42.527 ] 00:29:42.527 } 00:29:42.528 23:14:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:29:42.789 [2024-11-26 23:14:21.710780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.789 [2024-11-26 23:14:21.710823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:42.789 [2024-11-26 23:14:21.710834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:42.789 [2024-11-26 23:14:21.710842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:42.790 [2024-11-26 23:14:21.710861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.790 [2024-11-26 23:14:21.710868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:42.790 [2024-11-26 23:14:21.710875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:42.790 [2024-11-26 23:14:21.710881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.790 [2024-11-26 23:14:21.710897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.790 [2024-11-26 23:14:21.710903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:42.790 [2024-11-26 23:14:21.710910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:42.790 [2024-11-26 23:14:21.710917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.790 [2024-11-26 23:14:21.710965] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.183 ms, result 0 00:29:42.790 true 00:29:42.790 23:14:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:29:42.790 23:14:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:42.790 23:14:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:43.050 23:14:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:29:43.050 23:14:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:29:43.050 23:14:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:43.050 [2024-11-26 23:14:22.119127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.050 [2024-11-26 23:14:22.119170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:43.050 [2024-11-26 23:14:22.119180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:43.050 [2024-11-26 23:14:22.119187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.050 [2024-11-26 23:14:22.119205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.050 [2024-11-26 23:14:22.119212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:43.050 [2024-11-26 23:14:22.119218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:43.050 [2024-11-26 23:14:22.119225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.050 [2024-11-26 23:14:22.119240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.050 [2024-11-26 23:14:22.119246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:43.050 [2024-11-26 23:14:22.119253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:43.050 [2024-11-26 23:14:22.119259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.050 [2024-11-26 23:14:22.119318] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.171 ms, result 0 00:29:43.050 true 00:29:43.050 23:14:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:43.308 { 00:29:43.309 "name": "ftl", 00:29:43.309 "properties": [ 00:29:43.309 { 00:29:43.309 "name": "superblock_version", 00:29:43.309 "value": 5, 00:29:43.309 "read-only": true 00:29:43.309 }, 00:29:43.309 { 00:29:43.309 "name": "base_device", 00:29:43.309 "bands": [ 00:29:43.309 { 00:29:43.309 "id": 0, 00:29:43.309 "state": "FREE", 00:29:43.309 "validity": 0.0 00:29:43.309 }, 00:29:43.309 { 00:29:43.309 "id": 1, 00:29:43.309 "state": "FREE", 00:29:43.309 "validity": 0.0 00:29:43.309 }, 00:29:43.309 { 00:29:43.309 "id": 2, 00:29:43.309 "state": "FREE", 00:29:43.309 "validity": 0.0 00:29:43.309 }, 00:29:43.309 { 00:29:43.309 "id": 3, 00:29:43.309 "state": "FREE", 00:29:43.309 "validity": 0.0 00:29:43.309 }, 00:29:43.309 { 00:29:43.309 "id": 4, 00:29:43.309 "state": "FREE", 00:29:43.309 "validity": 0.0 00:29:43.309 }, 00:29:43.309 { 00:29:43.309 "id": 5, 00:29:43.309 "state": "FREE", 00:29:43.309 "validity": 0.0 00:29:43.309 }, 00:29:43.309 { 00:29:43.309 "id": 6, 00:29:43.309 "state": "FREE", 00:29:43.309 "validity": 0.0 00:29:43.309 }, 00:29:43.309 { 00:29:43.309 "id": 7, 00:29:43.309 "state": "FREE", 00:29:43.309 "validity": 0.0 00:29:43.309 }, 00:29:43.309 { 00:29:43.309 "id": 8, 00:29:43.309 "state": "FREE", 00:29:43.309 "validity": 0.0 00:29:43.309 }, 00:29:43.309 { 00:29:43.309 "id": 9, 00:29:43.309 "state": "FREE", 00:29:43.309 "validity": 0.0 00:29:43.309 }, 00:29:43.309 { 00:29:43.309 "id": 10, 00:29:43.309 "state": "FREE", 00:29:43.309 "validity": 0.0 00:29:43.309 }, 00:29:43.309 { 00:29:43.309 "id": 11, 00:29:43.309 "state": "FREE", 00:29:43.309 "validity": 0.0 00:29:43.309 }, 00:29:43.309 { 00:29:43.309 "id": 12, 00:29:43.309 "state": "FREE", 00:29:43.309 "validity": 0.0 00:29:43.309 }, 00:29:43.309 { 00:29:43.309 "id": 13, 00:29:43.309 "state": "FREE", 00:29:43.309 "validity": 0.0 00:29:43.309 }, 00:29:43.309 { 00:29:43.309 "id": 14, 00:29:43.309 "state": "FREE", 00:29:43.309 "validity": 0.0 00:29:43.309 }, 00:29:43.309 { 00:29:43.309 "id": 15, 00:29:43.309 "state": "FREE", 00:29:43.309 "validity": 0.0 00:29:43.309 }, 00:29:43.309 { 00:29:43.309 "id": 16, 00:29:43.309 "state": "FREE", 00:29:43.309 "validity": 0.0 00:29:43.309 }, 00:29:43.309 { 00:29:43.309 "id": 17, 00:29:43.309 "state": "FREE", 00:29:43.309 "validity": 0.0 00:29:43.309 } 00:29:43.309 ], 00:29:43.309 "read-only": true 00:29:43.309 }, 00:29:43.309 { 00:29:43.309 "name": "cache_device", 00:29:43.309 "type": "bdev", 00:29:43.309 "chunks": [ 00:29:43.309 { 00:29:43.309 "id": 0, 00:29:43.309 "state": "INACTIVE", 00:29:43.309 "utilization": 0.0 00:29:43.309 }, 00:29:43.309 { 00:29:43.309 "id": 1, 00:29:43.309 "state": "CLOSED", 00:29:43.309 "utilization": 1.0 00:29:43.309 }, 00:29:43.309 { 00:29:43.309 "id": 2, 00:29:43.309 "state": "CLOSED", 00:29:43.309 "utilization": 1.0 00:29:43.309 }, 00:29:43.309 { 00:29:43.309 "id": 3, 00:29:43.309 "state": "OPEN", 00:29:43.309 "utilization": 0.001953125 00:29:43.309 }, 00:29:43.309 { 00:29:43.309 "id": 4, 00:29:43.309 "state": "OPEN", 00:29:43.309 "utilization": 0.0 00:29:43.309 } 00:29:43.309 ], 00:29:43.309 "read-only": true 00:29:43.309 }, 00:29:43.309 { 00:29:43.309 "name": "verbose_mode", 00:29:43.309 "value": true, 00:29:43.309 "unit": "", 00:29:43.309 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:43.309 }, 00:29:43.309 { 00:29:43.309 "name": "prep_upgrade_on_shutdown", 00:29:43.309 "value": true, 00:29:43.309 "unit": "", 00:29:43.309 
"desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:43.309 } 00:29:43.309 ] 00:29:43.309 } 00:29:43.309 23:14:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:29:43.309 23:14:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 95689 ]] 00:29:43.309 23:14:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 95689 00:29:43.309 23:14:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 95689 ']' 00:29:43.309 23:14:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 95689 00:29:43.309 23:14:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:43.309 23:14:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:43.309 23:14:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95689 00:29:43.309 killing process with pid 95689 00:29:43.309 23:14:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:43.309 23:14:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:43.309 23:14:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95689' 00:29:43.309 23:14:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 95689 00:29:43.309 23:14:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 95689 00:29:43.570 [2024-11-26 23:14:22.487265] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:43.570 [2024-11-26 23:14:22.490646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.570 [2024-11-26 23:14:22.490679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:43.570 [2024-11-26 23:14:22.490690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:43.570 [2024-11-26 23:14:22.490701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.570 [2024-11-26 23:14:22.490720] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:43.570 [2024-11-26 23:14:22.491241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.570 [2024-11-26 23:14:22.491264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:43.570 [2024-11-26 23:14:22.491272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.509 ms 00:29:43.570 [2024-11-26 23:14:22.491279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.599 [2024-11-26 23:14:31.381587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.599 [2024-11-26 23:14:31.381644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:53.599 [2024-11-26 23:14:31.381658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8890.246 ms 00:29:53.599 [2024-11-26 23:14:31.381666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.599 [2024-11-26 23:14:31.382769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.599 [2024-11-26 23:14:31.382783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:53.599 [2024-11-26 23:14:31.382791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.090 ms 00:29:53.599 [2024-11-26 23:14:31.382797] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.599 [2024-11-26 23:14:31.383742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.599 [2024-11-26 23:14:31.383854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:53.599 [2024-11-26 23:14:31.383868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.869 ms 00:29:53.599 [2024-11-26 23:14:31.383875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.599 [2024-11-26 23:14:31.385506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.599 [2024-11-26 23:14:31.385531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:53.599 [2024-11-26 23:14:31.385538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.603 ms 00:29:53.599 [2024-11-26 23:14:31.385544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.599 [2024-11-26 23:14:31.387548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.599 [2024-11-26 23:14:31.387578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:53.599 [2024-11-26 23:14:31.387586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.945 ms 00:29:53.599 [2024-11-26 23:14:31.387597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.599 [2024-11-26 23:14:31.387642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.599 [2024-11-26 23:14:31.387650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:53.599 [2024-11-26 23:14:31.387657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:29:53.599 [2024-11-26 23:14:31.387663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.599 [2024-11-26 23:14:31.388662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.599 [2024-11-26 23:14:31.388697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:53.599 [2024-11-26 23:14:31.388704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.986 ms 00:29:53.599 [2024-11-26 23:14:31.388709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.599 [2024-11-26 23:14:31.389725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.599 [2024-11-26 23:14:31.389762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:53.599 [2024-11-26 23:14:31.389769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.991 ms 00:29:53.599 [2024-11-26 23:14:31.389775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.599 [2024-11-26 23:14:31.390602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.599 [2024-11-26 23:14:31.390704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:53.599 [2024-11-26 23:14:31.390716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.802 ms 00:29:53.599 [2024-11-26 23:14:31.390722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.599 [2024-11-26 23:14:31.391618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.599 [2024-11-26 23:14:31.391641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:53.599 [2024-11-26 23:14:31.391649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.825 ms 
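The used=3 a few lines back (upgrade_shutdown.sh@63/@64, just before this shutdown began) was computed by filtering the bdev_ftl_get_properties dump: against the chunk list printed above it counts chunks 1 and 2 (CLOSED, utilization 1.0) plus chunk 3 (OPEN, 0.001953125), while chunk 0 (INACTIVE) and chunk 4 (OPEN at 0.0) fall out. A standalone sketch of that check; paths assume the repo layout seen elsewhere in the trace:

    used=$("$SPDK_DIR/scripts/rpc.py" bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device")
               | .chunks[] | select(.utilization != 0.0)] | length')
    # The check only makes sense when dirty NV-cache chunks exist, i.e. the
    # shutdown-time upgrade actually has data to migrate.
    [[ $used -eq 0 ]] && echo "no dirty chunks to exercise" >&2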
00:29:53.599 [2024-11-26 23:14:31.391654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.599 [2024-11-26 23:14:31.391678] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:53.599 [2024-11-26 23:14:31.391696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:53.599 [2024-11-26 23:14:31.391705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:53.599 [2024-11-26 23:14:31.391712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:53.599 [2024-11-26 23:14:31.391719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:53.599 [2024-11-26 23:14:31.391726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:53.599 [2024-11-26 23:14:31.391732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:53.599 [2024-11-26 23:14:31.391738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:53.599 [2024-11-26 23:14:31.391745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:53.599 [2024-11-26 23:14:31.391751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:53.599 [2024-11-26 23:14:31.391757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:53.599 [2024-11-26 23:14:31.391763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:53.599 [2024-11-26 23:14:31.391769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:53.599 [2024-11-26 23:14:31.391775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:53.599 [2024-11-26 23:14:31.391781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:53.600 [2024-11-26 23:14:31.391787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:53.600 [2024-11-26 23:14:31.391794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:53.600 [2024-11-26 23:14:31.391800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:53.600 [2024-11-26 23:14:31.391807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:53.600 [2024-11-26 23:14:31.391815] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:53.600 [2024-11-26 23:14:31.391821] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: b8d864d2-5103-4c9b-aef1-1f6d395a70b5 00:29:53.600 [2024-11-26 23:14:31.391828] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:53.600 [2024-11-26 23:14:31.391836] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:29:53.600 [2024-11-26 23:14:31.391842] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:29:53.600 [2024-11-26 23:14:31.391848] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:29:53.600 [2024-11-26 
23:14:31.391854] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:53.600 [2024-11-26 23:14:31.391861] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:53.600 [2024-11-26 23:14:31.391867] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:53.600 [2024-11-26 23:14:31.391873] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:53.600 [2024-11-26 23:14:31.391878] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:53.600 [2024-11-26 23:14:31.391884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.600 [2024-11-26 23:14:31.391891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:53.600 [2024-11-26 23:14:31.391897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.207 ms 00:29:53.600 [2024-11-26 23:14:31.391904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.600 [2024-11-26 23:14:31.393675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.600 [2024-11-26 23:14:31.393702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:53.600 [2024-11-26 23:14:31.393710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.758 ms 00:29:53.600 [2024-11-26 23:14:31.393716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.600 [2024-11-26 23:14:31.393802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.600 [2024-11-26 23:14:31.393809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:53.600 [2024-11-26 23:14:31.393816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.072 ms 00:29:53.600 [2024-11-26 23:14:31.393821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.600 [2024-11-26 23:14:31.399786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:53.600 [2024-11-26 23:14:31.399815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:53.600 [2024-11-26 23:14:31.399823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:53.600 [2024-11-26 23:14:31.399829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.600 [2024-11-26 23:14:31.399857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:53.600 [2024-11-26 23:14:31.399864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:53.600 [2024-11-26 23:14:31.399870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:53.600 [2024-11-26 23:14:31.399877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.600 [2024-11-26 23:14:31.399936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:53.600 [2024-11-26 23:14:31.399944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:53.600 [2024-11-26 23:14:31.399950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:53.600 [2024-11-26 23:14:31.399957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.600 [2024-11-26 23:14:31.399970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:53.600 [2024-11-26 23:14:31.399980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:53.600 [2024-11-26 23:14:31.399986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 
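One figure in the statistics block above is worth unpacking: WAF 1.5006 is simply the ratio of the two counters beside it, total writes / user writes = 786752 / 524288 ≈ 1.5006. The 524288 user blocks are the two 1024 MiB fill passes (2 GiB at 4 KiB per block); the extra ~0.5 is presumably the internal write traffic FTL generated on top of them.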
00:29:53.600 [2024-11-26 23:14:31.399992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.600 [2024-11-26 23:14:31.411412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:53.600 [2024-11-26 23:14:31.411551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:53.600 [2024-11-26 23:14:31.411565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:53.600 [2024-11-26 23:14:31.411572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.600 [2024-11-26 23:14:31.420145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:53.600 [2024-11-26 23:14:31.420180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:53.600 [2024-11-26 23:14:31.420189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:53.600 [2024-11-26 23:14:31.420195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.600 [2024-11-26 23:14:31.420264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:53.600 [2024-11-26 23:14:31.420272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:53.600 [2024-11-26 23:14:31.420279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:53.600 [2024-11-26 23:14:31.420289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.600 [2024-11-26 23:14:31.420329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:53.600 [2024-11-26 23:14:31.420337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:53.600 [2024-11-26 23:14:31.420346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:53.600 [2024-11-26 23:14:31.420353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.600 [2024-11-26 23:14:31.420409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:53.600 [2024-11-26 23:14:31.420420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:53.600 [2024-11-26 23:14:31.420426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:53.600 [2024-11-26 23:14:31.420433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.600 [2024-11-26 23:14:31.420458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:53.600 [2024-11-26 23:14:31.420466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:53.600 [2024-11-26 23:14:31.420472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:53.600 [2024-11-26 23:14:31.420479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.600 [2024-11-26 23:14:31.420515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:53.600 [2024-11-26 23:14:31.420521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:53.600 [2024-11-26 23:14:31.420531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:53.600 [2024-11-26 23:14:31.420537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.600 [2024-11-26 23:14:31.420578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:53.600 [2024-11-26 23:14:31.420586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:53.600 [2024-11-26 23:14:31.420596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl] duration: 0.000 ms 00:29:53.600 [2024-11-26 23:14:31.420602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.600 [2024-11-26 23:14:31.420714] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8930.011 ms, result 0 00:29:54.984 23:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:54.984 23:14:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:29:54.984 23:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:54.984 23:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:54.984 23:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:54.984 23:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=96184 00:29:54.984 23:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:54.984 23:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:54.984 23:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 96184 00:29:54.984 23:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 96184 ']' 00:29:54.984 23:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:54.984 23:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:54.984 23:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:54.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:54.984 23:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:54.984 23:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:55.244 [2024-11-26 23:14:34.179436] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:29:55.244 [2024-11-26 23:14:34.179566] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96184 ] 00:29:55.244 [2024-11-26 23:14:34.316458] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
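This relaunch is the point of the whole test: tcp_target_setup finds the tgt.json saved while prep_upgrade_on_shutdown was armed, so instead of rebuilding the bdev stack over RPC it boots spdk_tgt straight from that config, and FTL has to come back from the state persisted during the 8930 ms shutdown above. A sketch of the branch taken here, with names from the trace ($SPDK_DIR again standing for /home/vagrant/spdk_repo/spdk; the first-boot path that creates bdevs over RPC is elided):

    tcp_target_setup() {
        local base_bdev= cache_bdev=
        if [[ -f "$SPDK_DIR/test/ftl/config/tgt.json" ]]; then
            # ftl/common.sh@85: restore the whole target, FTL included
            "$SPDK_DIR/build/bin/spdk_tgt" "--cpumask=[0]" \
                --config="$SPDK_DIR/test/ftl/config/tgt.json" &
            spdk_tgt_pid=$!
            export spdk_tgt_pid
            # ftl/common.sh@91: block until the RPC socket answers
            waitforlisten "$spdk_tgt_pid"
        fi
    }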
00:29:55.244 [2024-11-26 23:14:34.346846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.504 [2024-11-26 23:14:34.387530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:55.765 [2024-11-26 23:14:34.804832] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:55.765 [2024-11-26 23:14:34.804930] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:56.025 [2024-11-26 23:14:34.958379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.025 [2024-11-26 23:14:34.958454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:56.025 [2024-11-26 23:14:34.958470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:56.025 [2024-11-26 23:14:34.958480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.025 [2024-11-26 23:14:34.958548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.025 [2024-11-26 23:14:34.958563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:56.025 [2024-11-26 23:14:34.958573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:29:56.025 [2024-11-26 23:14:34.958582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.025 [2024-11-26 23:14:34.958611] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:56.026 [2024-11-26 23:14:34.958899] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:56.026 [2024-11-26 23:14:34.958924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.026 [2024-11-26 23:14:34.958933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:56.026 [2024-11-26 23:14:34.958943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.319 ms 00:29:56.026 [2024-11-26 23:14:34.958954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.026 [2024-11-26 23:14:34.961219] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:56.026 [2024-11-26 23:14:34.965931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.026 [2024-11-26 23:14:34.965982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:56.026 [2024-11-26 23:14:34.965996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.715 ms 00:29:56.026 [2024-11-26 23:14:34.966004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.026 [2024-11-26 23:14:34.966094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.026 [2024-11-26 23:14:34.966105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:56.026 [2024-11-26 23:14:34.966121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:29:56.026 [2024-11-26 23:14:34.966134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.026 [2024-11-26 23:14:34.977578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.026 [2024-11-26 23:14:34.977785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:56.026 [2024-11-26 23:14:34.977806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.378 ms 00:29:56.026 [2024-11-26 23:14:34.977825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:29:56.026 [2024-11-26 23:14:34.977882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.026 [2024-11-26 23:14:34.977893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:56.026 [2024-11-26 23:14:34.977902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:29:56.026 [2024-11-26 23:14:34.977911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.026 [2024-11-26 23:14:34.977994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.026 [2024-11-26 23:14:34.978009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:56.026 [2024-11-26 23:14:34.978018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:29:56.026 [2024-11-26 23:14:34.978026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.026 [2024-11-26 23:14:34.978055] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:56.026 [2024-11-26 23:14:34.980786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.026 [2024-11-26 23:14:34.980827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:56.026 [2024-11-26 23:14:34.980837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.738 ms 00:29:56.026 [2024-11-26 23:14:34.980851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.026 [2024-11-26 23:14:34.980883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.026 [2024-11-26 23:14:34.980892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:56.026 [2024-11-26 23:14:34.980901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:56.026 [2024-11-26 23:14:34.980909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.026 [2024-11-26 23:14:34.980936] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:56.026 [2024-11-26 23:14:34.980962] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:56.026 [2024-11-26 23:14:34.981003] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:56.026 [2024-11-26 23:14:34.981032] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:56.026 [2024-11-26 23:14:34.981145] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:56.026 [2024-11-26 23:14:34.981161] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:56.026 [2024-11-26 23:14:34.981172] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:56.026 [2024-11-26 23:14:34.981183] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:56.026 [2024-11-26 23:14:34.981192] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:56.026 [2024-11-26 23:14:34.981200] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:56.026 [2024-11-26 23:14:34.981208] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:56.026 [2024-11-26 23:14:34.981217] ftl_layout.c: 
691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:56.026 [2024-11-26 23:14:34.981224] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:56.026 [2024-11-26 23:14:34.981236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.026 [2024-11-26 23:14:34.981244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:56.026 [2024-11-26 23:14:34.981254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.305 ms 00:29:56.026 [2024-11-26 23:14:34.981265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.026 [2024-11-26 23:14:34.981558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.026 [2024-11-26 23:14:34.981610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:56.026 [2024-11-26 23:14:34.981633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:29:56.026 [2024-11-26 23:14:34.981652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.026 [2024-11-26 23:14:34.981779] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:56.026 [2024-11-26 23:14:34.981936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:56.026 [2024-11-26 23:14:34.981964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:56.026 [2024-11-26 23:14:34.981984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.026 [2024-11-26 23:14:34.982006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:56.026 [2024-11-26 23:14:34.982026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:56.026 [2024-11-26 23:14:34.982047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:56.026 [2024-11-26 23:14:34.982066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:56.026 [2024-11-26 23:14:34.982085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:56.026 [2024-11-26 23:14:34.982103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.026 [2024-11-26 23:14:34.982121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:56.026 [2024-11-26 23:14:34.982139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:56.026 [2024-11-26 23:14:34.982205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.026 [2024-11-26 23:14:34.982235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:56.026 [2024-11-26 23:14:34.982255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:56.026 [2024-11-26 23:14:34.982264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.026 [2024-11-26 23:14:34.982281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:56.026 [2024-11-26 23:14:34.982289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:56.026 [2024-11-26 23:14:34.982314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.026 [2024-11-26 23:14:34.982322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:56.026 [2024-11-26 23:14:34.982330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:56.026 [2024-11-26 23:14:34.982337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:56.026 [2024-11-26 
23:14:34.982344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:56.026 [2024-11-26 23:14:34.982352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:56.026 [2024-11-26 23:14:34.982358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:56.026 [2024-11-26 23:14:34.982365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:56.026 [2024-11-26 23:14:34.982372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:56.026 [2024-11-26 23:14:34.982386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:56.027 [2024-11-26 23:14:34.982394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:56.027 [2024-11-26 23:14:34.982404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:56.027 [2024-11-26 23:14:34.982411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:56.027 [2024-11-26 23:14:34.982419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:56.027 [2024-11-26 23:14:34.982426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:56.027 [2024-11-26 23:14:34.982433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.027 [2024-11-26 23:14:34.982440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:56.027 [2024-11-26 23:14:34.982447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:56.027 [2024-11-26 23:14:34.982454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.027 [2024-11-26 23:14:34.982461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:56.027 [2024-11-26 23:14:34.982468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:56.027 [2024-11-26 23:14:34.982474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.027 [2024-11-26 23:14:34.982481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:56.027 [2024-11-26 23:14:34.982488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:56.027 [2024-11-26 23:14:34.982494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.027 [2024-11-26 23:14:34.982501] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:56.027 [2024-11-26 23:14:34.982510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:56.027 [2024-11-26 23:14:34.982523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:56.027 [2024-11-26 23:14:34.982531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.027 [2024-11-26 23:14:34.982538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:56.027 [2024-11-26 23:14:34.982546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:56.027 [2024-11-26 23:14:34.982553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:56.027 [2024-11-26 23:14:34.982559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:56.027 [2024-11-26 23:14:34.982567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:56.027 [2024-11-26 23:14:34.982574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:56.027 [2024-11-26 23:14:34.982583] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB 
metadata layout - nvc: 00:29:56.027 [2024-11-26 23:14:34.982595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:56.027 [2024-11-26 23:14:34.982604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:56.027 [2024-11-26 23:14:34.982612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:56.027 [2024-11-26 23:14:34.982619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:56.027 [2024-11-26 23:14:34.982627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:56.027 [2024-11-26 23:14:34.982642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:56.027 [2024-11-26 23:14:34.982649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:56.027 [2024-11-26 23:14:34.982660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:56.027 [2024-11-26 23:14:34.982668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:56.027 [2024-11-26 23:14:34.982675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:56.027 [2024-11-26 23:14:34.982683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:56.027 [2024-11-26 23:14:34.982690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:56.027 [2024-11-26 23:14:34.982698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:56.027 [2024-11-26 23:14:34.982705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:56.027 [2024-11-26 23:14:34.982714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:56.027 [2024-11-26 23:14:34.982722] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:56.027 [2024-11-26 23:14:34.982736] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:56.027 [2024-11-26 23:14:34.982750] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:56.027 [2024-11-26 23:14:34.982757] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:56.027 [2024-11-26 23:14:34.982765] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:56.027 [2024-11-26 23:14:34.982772] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:56.027 [2024-11-26 23:14:34.982780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.027 [2024-11-26 23:14:34.982790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:56.027 [2024-11-26 23:14:34.982800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.072 ms 00:29:56.027 [2024-11-26 23:14:34.982809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.027 [2024-11-26 23:14:34.982886] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:29:56.027 [2024-11-26 23:14:34.982897] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:00.239 [2024-11-26 23:14:39.254668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.239 [2024-11-26 23:14:39.255038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:00.239 [2024-11-26 23:14:39.255136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4271.762 ms 00:30:00.239 [2024-11-26 23:14:39.255165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.239 [2024-11-26 23:14:39.273896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.239 [2024-11-26 23:14:39.274138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:00.239 [2024-11-26 23:14:39.274245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.564 ms 00:30:00.239 [2024-11-26 23:14:39.274272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.239 [2024-11-26 23:14:39.274428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.239 [2024-11-26 23:14:39.274460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:00.239 [2024-11-26 23:14:39.274489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:30:00.239 [2024-11-26 23:14:39.274510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.239 [2024-11-26 23:14:39.291808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.239 [2024-11-26 23:14:39.292009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:00.239 [2024-11-26 23:14:39.292134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.240 ms 00:30:00.239 [2024-11-26 23:14:39.292164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.239 [2024-11-26 23:14:39.292236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.239 [2024-11-26 23:14:39.292267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:00.239 [2024-11-26 23:14:39.292289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:00.239 [2024-11-26 23:14:39.292326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.239 [2024-11-26 23:14:39.293067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.239 [2024-11-26 23:14:39.293227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:00.239 [2024-11-26 23:14:39.293374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.665 ms 00:30:00.239 [2024-11-26 23:14:39.293405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
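Every startup step above is bracketed by a trace_step notice pair from mngt/ftl_mngt.c: an "Action" record, then the step name, its duration, and a status. A quick way to see where startup time goes is to pull those pairs out of a saved run log. A minimal sketch, assuming the output above was captured to ./ftl.log with one notice per line (as the original, unwrapped log is):

    # Pair each step name (428:trace_step) with its duration
    # (430:trace_step) and list the slowest steps first.
    awk -F'name: ' '
        /428:trace_step/ { name = $2 }
        /430:trace_step/ && match($0, /duration: [0-9.]+/) {
            printf "%12s ms  %s\n", substr($0, RSTART + 10, RLENGTH - 10), name
        }' ftl.log | sort -rn | head

On this run it would put "Scrub NV cache" (4271.762 ms) far ahead of every other step, matching the scrubbing notice above.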
00:30:00.239 [2024-11-26 23:14:39.293495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.239 [2024-11-26 23:14:39.293524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:00.239 [2024-11-26 23:14:39.293551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:30:00.239 [2024-11-26 23:14:39.293576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.239 [2024-11-26 23:14:39.305435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.239 [2024-11-26 23:14:39.305593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:00.239 [2024-11-26 23:14:39.305658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.706 ms 00:30:00.239 [2024-11-26 23:14:39.305685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.239 [2024-11-26 23:14:39.324430] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:00.239 [2024-11-26 23:14:39.324850] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:00.239 [2024-11-26 23:14:39.325062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.239 [2024-11-26 23:14:39.325128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:30:00.239 [2024-11-26 23:14:39.325247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.214 ms 00:30:00.239 [2024-11-26 23:14:39.325385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.239 [2024-11-26 23:14:39.332024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.239 [2024-11-26 23:14:39.332183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:30:00.239 [2024-11-26 23:14:39.332246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.407 ms 00:30:00.239 [2024-11-26 23:14:39.332270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.239 [2024-11-26 23:14:39.335003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.239 [2024-11-26 23:14:39.335162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:30:00.239 [2024-11-26 23:14:39.335223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.631 ms 00:30:00.239 [2024-11-26 23:14:39.335245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.239 [2024-11-26 23:14:39.337757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.239 [2024-11-26 23:14:39.337904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:30:00.239 [2024-11-26 23:14:39.337959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.463 ms 00:30:00.239 [2024-11-26 23:14:39.337981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.239 [2024-11-26 23:14:39.338503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.239 [2024-11-26 23:14:39.338637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:00.239 [2024-11-26 23:14:39.338657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.343 ms 00:30:00.239 [2024-11-26 23:14:39.338673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.500 [2024-11-26 23:14:39.367637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:30:00.500 [2024-11-26 23:14:39.367714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:00.500 [2024-11-26 23:14:39.367743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.935 ms 00:30:00.500 [2024-11-26 23:14:39.367753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.500 [2024-11-26 23:14:39.376171] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:00.500 [2024-11-26 23:14:39.377417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.500 [2024-11-26 23:14:39.377571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:00.500 [2024-11-26 23:14:39.377591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.586 ms 00:30:00.500 [2024-11-26 23:14:39.377601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.500 [2024-11-26 23:14:39.377700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.500 [2024-11-26 23:14:39.377713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:30:00.500 [2024-11-26 23:14:39.377724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:30:00.500 [2024-11-26 23:14:39.377733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.500 [2024-11-26 23:14:39.377799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.500 [2024-11-26 23:14:39.377814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:00.500 [2024-11-26 23:14:39.377824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:30:00.500 [2024-11-26 23:14:39.377832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.500 [2024-11-26 23:14:39.377859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.500 [2024-11-26 23:14:39.377869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:00.500 [2024-11-26 23:14:39.377879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:00.500 [2024-11-26 23:14:39.377887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.500 [2024-11-26 23:14:39.377944] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:00.500 [2024-11-26 23:14:39.377957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.500 [2024-11-26 23:14:39.377976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:00.500 [2024-11-26 23:14:39.377989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:30:00.500 [2024-11-26 23:14:39.377999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.500 [2024-11-26 23:14:39.383903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.500 [2024-11-26 23:14:39.383957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:00.500 [2024-11-26 23:14:39.383972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.881 ms 00:30:00.500 [2024-11-26 23:14:39.383981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.500 [2024-11-26 23:14:39.384081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.500 [2024-11-26 23:14:39.384093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:00.500 
[2024-11-26 23:14:39.384104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:30:00.500 [2024-11-26 23:14:39.384118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.500 [2024-11-26 23:14:39.386456] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4427.386 ms, result 0 00:30:00.500 [2024-11-26 23:14:39.398928] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:00.500 [2024-11-26 23:14:39.414921] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:00.500 [2024-11-26 23:14:39.423117] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:00.500 23:14:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:00.500 23:14:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:00.500 23:14:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:00.500 23:14:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:00.500 23:14:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:00.760 [2024-11-26 23:14:39.667134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.760 [2024-11-26 23:14:39.667197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:00.760 [2024-11-26 23:14:39.667214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:30:00.760 [2024-11-26 23:14:39.667224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.760 [2024-11-26 23:14:39.667251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.760 [2024-11-26 23:14:39.667265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:00.760 [2024-11-26 23:14:39.667276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:00.760 [2024-11-26 23:14:39.667284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.760 [2024-11-26 23:14:39.667327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.760 [2024-11-26 23:14:39.667339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:00.760 [2024-11-26 23:14:39.667348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:00.760 [2024-11-26 23:14:39.667358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.760 [2024-11-26 23:14:39.667428] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.286 ms, result 0 00:30:00.760 true 00:30:00.760 23:14:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:01.022 { 00:30:01.022 "name": "ftl", 00:30:01.022 "properties": [ 00:30:01.022 { 00:30:01.022 "name": "superblock_version", 00:30:01.022 "value": 5, 00:30:01.022 "read-only": true 00:30:01.022 }, 00:30:01.022 { 00:30:01.022 "name": "base_device", 00:30:01.022 "bands": [ 00:30:01.022 { 00:30:01.022 "id": 0, 00:30:01.022 "state": "CLOSED", 00:30:01.022 "validity": 1.0 00:30:01.022 }, 00:30:01.022 { 00:30:01.022 "id": 1, 00:30:01.022 "state": "CLOSED", 00:30:01.022 "validity": 1.0 
00:30:01.022 }, 00:30:01.022 { 00:30:01.022 "id": 2, 00:30:01.022 "state": "CLOSED", 00:30:01.022 "validity": 0.007843137254901933 00:30:01.022 }, 00:30:01.022 { 00:30:01.022 "id": 3, 00:30:01.022 "state": "FREE", 00:30:01.022 "validity": 0.0 00:30:01.022 }, 00:30:01.022 { 00:30:01.022 "id": 4, 00:30:01.022 "state": "FREE", 00:30:01.022 "validity": 0.0 00:30:01.022 }, 00:30:01.022 { 00:30:01.022 "id": 5, 00:30:01.022 "state": "FREE", 00:30:01.022 "validity": 0.0 00:30:01.022 }, 00:30:01.022 { 00:30:01.022 "id": 6, 00:30:01.022 "state": "FREE", 00:30:01.022 "validity": 0.0 00:30:01.022 }, 00:30:01.022 { 00:30:01.022 "id": 7, 00:30:01.022 "state": "FREE", 00:30:01.022 "validity": 0.0 00:30:01.022 }, 00:30:01.022 { 00:30:01.022 "id": 8, 00:30:01.022 "state": "FREE", 00:30:01.022 "validity": 0.0 00:30:01.022 }, 00:30:01.022 { 00:30:01.022 "id": 9, 00:30:01.022 "state": "FREE", 00:30:01.022 "validity": 0.0 00:30:01.022 }, 00:30:01.022 { 00:30:01.022 "id": 10, 00:30:01.022 "state": "FREE", 00:30:01.022 "validity": 0.0 00:30:01.022 }, 00:30:01.022 { 00:30:01.022 "id": 11, 00:30:01.022 "state": "FREE", 00:30:01.022 "validity": 0.0 00:30:01.022 }, 00:30:01.022 { 00:30:01.022 "id": 12, 00:30:01.022 "state": "FREE", 00:30:01.022 "validity": 0.0 00:30:01.022 }, 00:30:01.022 { 00:30:01.022 "id": 13, 00:30:01.022 "state": "FREE", 00:30:01.022 "validity": 0.0 00:30:01.022 }, 00:30:01.022 { 00:30:01.022 "id": 14, 00:30:01.022 "state": "FREE", 00:30:01.022 "validity": 0.0 00:30:01.022 }, 00:30:01.022 { 00:30:01.023 "id": 15, 00:30:01.023 "state": "FREE", 00:30:01.023 "validity": 0.0 00:30:01.023 }, 00:30:01.023 { 00:30:01.023 "id": 16, 00:30:01.023 "state": "FREE", 00:30:01.023 "validity": 0.0 00:30:01.023 }, 00:30:01.023 { 00:30:01.023 "id": 17, 00:30:01.023 "state": "FREE", 00:30:01.023 "validity": 0.0 00:30:01.023 } 00:30:01.023 ], 00:30:01.023 "read-only": true 00:30:01.023 }, 00:30:01.023 { 00:30:01.023 "name": "cache_device", 00:30:01.023 "type": "bdev", 00:30:01.023 "chunks": [ 00:30:01.023 { 00:30:01.023 "id": 0, 00:30:01.023 "state": "INACTIVE", 00:30:01.023 "utilization": 0.0 00:30:01.023 }, 00:30:01.023 { 00:30:01.023 "id": 1, 00:30:01.023 "state": "OPEN", 00:30:01.023 "utilization": 0.0 00:30:01.023 }, 00:30:01.023 { 00:30:01.023 "id": 2, 00:30:01.023 "state": "OPEN", 00:30:01.023 "utilization": 0.0 00:30:01.023 }, 00:30:01.023 { 00:30:01.023 "id": 3, 00:30:01.023 "state": "FREE", 00:30:01.023 "utilization": 0.0 00:30:01.023 }, 00:30:01.023 { 00:30:01.023 "id": 4, 00:30:01.023 "state": "FREE", 00:30:01.023 "utilization": 0.0 00:30:01.023 } 00:30:01.023 ], 00:30:01.023 "read-only": true 00:30:01.023 }, 00:30:01.023 { 00:30:01.023 "name": "verbose_mode", 00:30:01.023 "value": true, 00:30:01.023 "unit": "", 00:30:01.023 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:01.023 }, 00:30:01.023 { 00:30:01.023 "name": "prep_upgrade_on_shutdown", 00:30:01.023 "value": false, 00:30:01.023 "unit": "", 00:30:01.023 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:01.023 } 00:30:01.023 ] 00:30:01.023 } 00:30:01.023 23:14:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:30:01.023 23:14:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:01.023 23:14:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:01.023 23:14:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:30:01.023 23:14:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:30:01.023 23:14:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:30:01.023 23:14:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:30:01.023 23:14:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:01.284 23:14:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:30:01.284 23:14:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:30:01.284 Validate MD5 checksum, iteration 1 00:30:01.284 23:14:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:30:01.284 23:14:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:01.284 23:14:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:01.284 23:14:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:01.284 23:14:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:01.284 23:14:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:01.284 23:14:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:01.284 23:14:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:01.284 23:14:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:01.284 23:14:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:01.284 23:14:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:01.545 [2024-11-26 23:14:40.430242] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:30:01.546 [2024-11-26 23:14:40.430612] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96264 ] 00:30:01.546 [2024-11-26 23:14:40.567854] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
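The two jq probes just traced gate the test: it proceeds only when no NV-cache chunks hold data and no bands are open for writing. Pulled out of the xtrace, the pair looks like this; the rpc.py path and both jq filters are copied verbatim from the trace, and only the helper variable is ours:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # NV-cache chunks that already hold any data:
    used=$("$rpc" bdev_ftl_get_properties -b ftl |
        jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')

    # Bands currently opened for writing:
    opened=$("$rpc" bdev_ftl_get_properties -b ftl |
        jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length')

    [[ $used -eq 0 && $opened -eq 0 ]] || exit 1   # both were 0 above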
00:30:01.546 [2024-11-26 23:14:40.601049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.546 [2024-11-26 23:14:40.630108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:03.022  [2024-11-26T23:14:43.097Z] Copying: 516/1024 [MB] (516 MBps) [2024-11-26T23:14:43.669Z] Copying: 1024/1024 [MB] (average 545 MBps) 00:30:04.542 00:30:04.542 23:14:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:04.542 23:14:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:07.090 23:14:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:07.090 Validate MD5 checksum, iteration 2 00:30:07.090 23:14:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ccceb722ec061e90045368a68ef6b1a0 00:30:07.091 23:14:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ccceb722ec061e90045368a68ef6b1a0 != \c\c\c\e\b\7\2\2\e\c\0\6\1\e\9\0\0\4\5\3\6\8\a\6\8\e\f\6\b\1\a\0 ]] 00:30:07.091 23:14:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:07.091 23:14:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:07.091 23:14:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:07.091 23:14:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:07.091 23:14:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:07.091 23:14:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:07.091 23:14:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:07.091 23:14:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:07.091 23:14:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:07.091 [2024-11-26 23:14:45.805721] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:30:07.091 [2024-11-26 23:14:45.805829] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96320 ] 00:30:07.091 [2024-11-26 23:14:45.936920] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
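Each validation pass above reads 1 GiB from the ftln1 NVMe/TCP namespace in 1 MiB blocks, hashes the output file, and advances skip by 1024 blocks for the next window. A condensed sketch of that loop, built from the xtrace (tcp_dd is the ftl/common.sh helper traced above; the expected[] array here simply holds the sums this run produced, where a real run records its own reference values):

    expected=(ccceb722ec061e90045368a68ef6b1a0 f3d75f62294b7a70410df82fa4be1118)
    iterations=2 skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
            --bs=1048576 --count=1024 --qd=2 --skip=$skip
        sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d ')
        [[ $sum == "${expected[i]}" ]] || exit 1   # mismatch fails the test
        skip=$((skip + 1024))
    done

The same loop runs again after the dirty restart below; matching sums there prove the written data survived the kill.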
00:30:07.091 [2024-11-26 23:14:45.960434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.091 [2024-11-26 23:14:45.977292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:08.477  [2024-11-26T23:14:47.864Z] Copying: 672/1024 [MB] (672 MBps) [2024-11-26T23:14:48.432Z] Copying: 1024/1024 [MB] (average 658 MBps) 00:30:09.305 00:30:09.305 23:14:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:09.305 23:14:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:11.224 23:14:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:11.224 23:14:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f3d75f62294b7a70410df82fa4be1118 00:30:11.224 23:14:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f3d75f62294b7a70410df82fa4be1118 != \f\3\d\7\5\f\6\2\2\9\4\b\7\a\7\0\4\1\0\d\f\8\2\f\a\4\b\e\1\1\1\8 ]] 00:30:11.224 23:14:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:11.224 23:14:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:11.224 23:14:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:30:11.224 23:14:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 96184 ]] 00:30:11.224 23:14:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 96184 00:30:11.224 23:14:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:30:11.224 23:14:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:30:11.224 23:14:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:11.224 23:14:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:11.224 23:14:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:11.224 23:14:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=96370 00:30:11.224 23:14:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:11.224 23:14:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:11.224 23:14:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 96370 00:30:11.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:11.224 23:14:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 96370 ']' 00:30:11.224 23:14:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:11.225 23:14:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:11.225 23:14:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:11.225 23:14:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:11.225 23:14:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:11.225 [2024-11-26 23:14:50.266043] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
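This is the heart of the test: the first target (pid 96184) is killed with SIGKILL while the FTL device is dirty, then a fresh target is started from the JSON config saved earlier and the script blocks on its RPC socket. A sketch paraphrased from the ftl/common.sh xtrace above, with error handling omitted and $rootdir standing in for /home/vagrant/spdk_repo/spdk:

    tcp_target_shutdown_dirty() {
        # kill -9 gives FTL no chance to shut down cleanly, so the
        # next startup must take the recovery path seen below
        [[ -n $spdk_tgt_pid ]] && kill -9 "$spdk_tgt_pid"
        unset spdk_tgt_pid
    }

    tcp_target_setup() {
        "$rootdir/build/bin/spdk_tgt" '--cpumask=[0]' \
            --config="$rootdir/test/ftl/config/tgt.json" &
        spdk_tgt_pid=$!
        waitforlisten "$spdk_tgt_pid"   # poll /var/tmp/spdk.sock until up
    }

Because tgt.json still describes the ftl bdev, the new target re-attaches to the same base and cache devices, and FTL sees the dirty state that was set before the kill.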
00:30:11.225 [2024-11-26 23:14:50.266164] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96370 ] 00:30:11.491 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 96184 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:30:11.491 [2024-11-26 23:14:50.402204] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:11.491 [2024-11-26 23:14:50.427612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.491 [2024-11-26 23:14:50.453767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:11.753 [2024-11-26 23:14:50.709656] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:11.753 [2024-11-26 23:14:50.709706] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:11.753 [2024-11-26 23:14:50.855265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.753 [2024-11-26 23:14:50.855435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:11.753 [2024-11-26 23:14:50.855457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:11.753 [2024-11-26 23:14:50.855466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.753 [2024-11-26 23:14:50.855534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.753 [2024-11-26 23:14:50.855547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:11.753 [2024-11-26 23:14:50.855556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:30:11.753 [2024-11-26 23:14:50.855564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.753 [2024-11-26 23:14:50.855586] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:11.753 [2024-11-26 23:14:50.855822] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:11.753 [2024-11-26 23:14:50.855837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.753 [2024-11-26 23:14:50.855845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:11.753 [2024-11-26 23:14:50.855853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.256 ms 00:30:11.753 [2024-11-26 23:14:50.855860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.753 [2024-11-26 23:14:50.856107] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:11.753 [2024-11-26 23:14:50.860783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.753 [2024-11-26 23:14:50.860819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:11.753 [2024-11-26 23:14:50.860831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.677 ms 00:30:11.753 [2024-11-26 23:14:50.860839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.753 [2024-11-26 23:14:50.861941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.753 [2024-11-26 23:14:50.861971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super 
block 00:30:11.753 [2024-11-26 23:14:50.861983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:30:11.753 [2024-11-26 23:14:50.861994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.753 [2024-11-26 23:14:50.862324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.753 [2024-11-26 23:14:50.862335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:11.753 [2024-11-26 23:14:50.862344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.284 ms 00:30:11.753 [2024-11-26 23:14:50.862351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.753 [2024-11-26 23:14:50.862387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.753 [2024-11-26 23:14:50.862396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:11.753 [2024-11-26 23:14:50.862404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:30:11.753 [2024-11-26 23:14:50.862411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.753 [2024-11-26 23:14:50.862437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.753 [2024-11-26 23:14:50.862448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:11.753 [2024-11-26 23:14:50.862456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:11.753 [2024-11-26 23:14:50.862463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.753 [2024-11-26 23:14:50.862483] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:11.753 [2024-11-26 23:14:50.863482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.753 [2024-11-26 23:14:50.863525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:11.754 [2024-11-26 23:14:50.863546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.003 ms 00:30:11.754 [2024-11-26 23:14:50.863570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.754 [2024-11-26 23:14:50.863611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.754 [2024-11-26 23:14:50.863632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:11.754 [2024-11-26 23:14:50.863651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:11.754 [2024-11-26 23:14:50.863669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.754 [2024-11-26 23:14:50.863712] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:11.754 [2024-11-26 23:14:50.863796] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:11.754 [2024-11-26 23:14:50.863860] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:11.754 [2024-11-26 23:14:50.863906] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:11.754 [2024-11-26 23:14:50.864033] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:11.754 [2024-11-26 23:14:50.864044] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:11.754 [2024-11-26 23:14:50.864054] 
upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:11.754 [2024-11-26 23:14:50.864065] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:11.754 [2024-11-26 23:14:50.864077] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:11.754 [2024-11-26 23:14:50.864085] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:11.754 [2024-11-26 23:14:50.864093] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:11.754 [2024-11-26 23:14:50.864100] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:11.754 [2024-11-26 23:14:50.864106] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:11.754 [2024-11-26 23:14:50.864116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.754 [2024-11-26 23:14:50.864123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:11.754 [2024-11-26 23:14:50.864131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.406 ms 00:30:11.754 [2024-11-26 23:14:50.864138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.754 [2024-11-26 23:14:50.864227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.754 [2024-11-26 23:14:50.864237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:11.754 [2024-11-26 23:14:50.864245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:30:11.754 [2024-11-26 23:14:50.864253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.754 [2024-11-26 23:14:50.864371] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:11.754 [2024-11-26 23:14:50.864384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:11.754 [2024-11-26 23:14:50.864392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:11.754 [2024-11-26 23:14:50.864400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:11.754 [2024-11-26 23:14:50.864408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:11.754 [2024-11-26 23:14:50.864414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:11.754 [2024-11-26 23:14:50.864421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:11.754 [2024-11-26 23:14:50.864428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:11.754 [2024-11-26 23:14:50.864434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:11.754 [2024-11-26 23:14:50.864441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:11.754 [2024-11-26 23:14:50.864448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:11.754 [2024-11-26 23:14:50.864454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:11.754 [2024-11-26 23:14:50.864460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:11.754 [2024-11-26 23:14:50.864470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:11.754 [2024-11-26 23:14:50.864483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:11.754 [2024-11-26 23:14:50.864489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:11.754 [2024-11-26 
23:14:50.864495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:11.754 [2024-11-26 23:14:50.864502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:11.754 [2024-11-26 23:14:50.864508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:11.754 [2024-11-26 23:14:50.864520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:11.754 [2024-11-26 23:14:50.864527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:11.754 [2024-11-26 23:14:50.864534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:11.754 [2024-11-26 23:14:50.864540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:11.754 [2024-11-26 23:14:50.864547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:11.754 [2024-11-26 23:14:50.864553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:11.754 [2024-11-26 23:14:50.864560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:11.754 [2024-11-26 23:14:50.864566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:11.754 [2024-11-26 23:14:50.864573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:11.754 [2024-11-26 23:14:50.864579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:11.754 [2024-11-26 23:14:50.864587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:11.754 [2024-11-26 23:14:50.864594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:11.754 [2024-11-26 23:14:50.864600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:11.754 [2024-11-26 23:14:50.864607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:11.754 [2024-11-26 23:14:50.864613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:11.754 [2024-11-26 23:14:50.864619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:11.754 [2024-11-26 23:14:50.864625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:11.754 [2024-11-26 23:14:50.864631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:11.754 [2024-11-26 23:14:50.864638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:11.754 [2024-11-26 23:14:50.864644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:11.754 [2024-11-26 23:14:50.864650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:11.754 [2024-11-26 23:14:50.864657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:11.754 [2024-11-26 23:14:50.864662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:11.754 [2024-11-26 23:14:50.864669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:11.754 [2024-11-26 23:14:50.864675] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:11.754 [2024-11-26 23:14:50.864682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:11.754 [2024-11-26 23:14:50.864693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:11.754 [2024-11-26 23:14:50.864700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:11.754 [2024-11-26 23:14:50.864707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:11.754 [2024-11-26 
23:14:50.864714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:11.754 [2024-11-26 23:14:50.864720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:11.754 [2024-11-26 23:14:50.864727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:11.755 [2024-11-26 23:14:50.864734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:11.755 [2024-11-26 23:14:50.864741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:11.755 [2024-11-26 23:14:50.864749] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:11.755 [2024-11-26 23:14:50.864758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:11.755 [2024-11-26 23:14:50.864767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:11.755 [2024-11-26 23:14:50.864774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:11.755 [2024-11-26 23:14:50.864781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:11.755 [2024-11-26 23:14:50.864788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:11.755 [2024-11-26 23:14:50.864795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:11.755 [2024-11-26 23:14:50.864801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:11.755 [2024-11-26 23:14:50.864810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:11.755 [2024-11-26 23:14:50.864817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:11.755 [2024-11-26 23:14:50.864824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:11.755 [2024-11-26 23:14:50.864831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:11.755 [2024-11-26 23:14:50.864838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:11.755 [2024-11-26 23:14:50.864845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:11.755 [2024-11-26 23:14:50.864851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:11.755 [2024-11-26 23:14:50.864858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:11.755 [2024-11-26 23:14:50.864865] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:11.755 [2024-11-26 23:14:50.864897] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:11.755 [2024-11-26 23:14:50.864905] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:11.755 [2024-11-26 23:14:50.864912] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:11.755 [2024-11-26 23:14:50.864919] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:11.755 [2024-11-26 23:14:50.864926] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:11.755 [2024-11-26 23:14:50.864933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.755 [2024-11-26 23:14:50.864940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:11.755 [2024-11-26 23:14:50.864950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.648 ms 00:30:11.755 [2024-11-26 23:14:50.864957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.755 [2024-11-26 23:14:50.873882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.755 [2024-11-26 23:14:50.873918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:11.755 [2024-11-26 23:14:50.873929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.875 ms 00:30:11.755 [2024-11-26 23:14:50.873940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.755 [2024-11-26 23:14:50.873975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.755 [2024-11-26 23:14:50.873986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:11.755 [2024-11-26 23:14:50.873995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:30:11.755 [2024-11-26 23:14:50.874001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.017 [2024-11-26 23:14:50.884953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.017 [2024-11-26 23:14:50.884988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:12.017 [2024-11-26 23:14:50.884999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.900 ms 00:30:12.017 [2024-11-26 23:14:50.885007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.017 [2024-11-26 23:14:50.885037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.017 [2024-11-26 23:14:50.885049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:12.017 [2024-11-26 23:14:50.885057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:12.017 [2024-11-26 23:14:50.885064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.017 [2024-11-26 23:14:50.885150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.017 [2024-11-26 23:14:50.885160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:12.017 [2024-11-26 23:14:50.885168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:30:12.017 [2024-11-26 23:14:50.885176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.017 [2024-11-26 23:14:50.885215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.017 
[2024-11-26 23:14:50.885226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:12.017 [2024-11-26 23:14:50.885235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:30:12.017 [2024-11-26 23:14:50.885243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.017 [2024-11-26 23:14:50.892483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.017 [2024-11-26 23:14:50.892513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:12.017 [2024-11-26 23:14:50.892523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.196 ms 00:30:12.017 [2024-11-26 23:14:50.892531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.017 [2024-11-26 23:14:50.892618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.018 [2024-11-26 23:14:50.892628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:30:12.018 [2024-11-26 23:14:50.892637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:12.018 [2024-11-26 23:14:50.892644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.018 [2024-11-26 23:14:50.909285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.018 [2024-11-26 23:14:50.909338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:30:12.018 [2024-11-26 23:14:50.909351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.622 ms 00:30:12.018 [2024-11-26 23:14:50.909360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.018 [2024-11-26 23:14:50.910912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.018 [2024-11-26 23:14:50.910952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:12.018 [2024-11-26 23:14:50.910972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.368 ms 00:30:12.018 [2024-11-26 23:14:50.910982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.018 [2024-11-26 23:14:50.929772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.018 [2024-11-26 23:14:50.929824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:12.018 [2024-11-26 23:14:50.929838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.739 ms 00:30:12.018 [2024-11-26 23:14:50.929846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.018 [2024-11-26 23:14:50.929984] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:30:12.018 [2024-11-26 23:14:50.930089] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:30:12.018 [2024-11-26 23:14:50.930187] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:30:12.018 [2024-11-26 23:14:50.930290] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:30:12.018 [2024-11-26 23:14:50.930317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.018 [2024-11-26 23:14:50.930325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:30:12.018 [2024-11-26 23:14:50.930334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.431 ms 00:30:12.018 [2024-11-26 23:14:50.930342] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.018 [2024-11-26 23:14:50.930393] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:30:12.018 [2024-11-26 23:14:50.930404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.018 [2024-11-26 23:14:50.930412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:30:12.018 [2024-11-26 23:14:50.930420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:30:12.018 [2024-11-26 23:14:50.930431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.018 [2024-11-26 23:14:50.934063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.018 [2024-11-26 23:14:50.934223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:30:12.018 [2024-11-26 23:14:50.934240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.610 ms 00:30:12.018 [2024-11-26 23:14:50.934249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.018 [2024-11-26 23:14:50.935143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.018 [2024-11-26 23:14:50.935173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:30:12.018 [2024-11-26 23:14:50.935185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:30:12.018 [2024-11-26 23:14:50.935194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.018 [2024-11-26 23:14:50.935280] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:30:12.018 [2024-11-26 23:14:50.935485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.018 [2024-11-26 23:14:50.935497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:12.018 [2024-11-26 23:14:50.935509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.206 ms 00:30:12.018 [2024-11-26 23:14:50.935521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.963 [2024-11-26 23:14:51.803936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.963 [2024-11-26 23:14:51.804036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:12.963 [2024-11-26 23:14:51.804056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 868.091 ms 00:30:12.963 [2024-11-26 23:14:51.804066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.963 [2024-11-26 23:14:51.806653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.963 [2024-11-26 23:14:51.806725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:12.963 [2024-11-26 23:14:51.806740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.003 ms 00:30:12.963 [2024-11-26 23:14:51.806750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.963 [2024-11-26 23:14:51.807423] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:30:12.963 [2024-11-26 23:14:51.807464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.963 [2024-11-26 23:14:51.807474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:12.963 [2024-11-26 23:14:51.807486] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.676 ms 00:30:12.963 [2024-11-26 23:14:51.807504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.963 [2024-11-26 23:14:51.808325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.963 [2024-11-26 23:14:51.808399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:12.963 [2024-11-26 23:14:51.808413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:30:12.963 [2024-11-26 23:14:51.808422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.963 [2024-11-26 23:14:51.808480] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 873.191 ms, result 0 00:30:12.963 [2024-11-26 23:14:51.808526] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:30:12.963 [2024-11-26 23:14:51.808620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.963 [2024-11-26 23:14:51.808631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:12.963 [2024-11-26 23:14:51.808640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.095 ms 00:30:12.963 [2024-11-26 23:14:51.808648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.906 [2024-11-26 23:14:52.811567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.906 [2024-11-26 23:14:52.811841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:13.906 [2024-11-26 23:14:52.811866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1002.310 ms 00:30:13.906 [2024-11-26 23:14:52.811876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.906 [2024-11-26 23:14:52.813980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.906 [2024-11-26 23:14:52.814019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:13.906 [2024-11-26 23:14:52.814031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.663 ms 00:30:13.906 [2024-11-26 23:14:52.814039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.906 [2024-11-26 23:14:52.815206] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:30:13.906 [2024-11-26 23:14:52.815277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.906 [2024-11-26 23:14:52.815287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:13.906 [2024-11-26 23:14:52.815306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.207 ms 00:30:13.906 [2024-11-26 23:14:52.815315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.906 [2024-11-26 23:14:52.815352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.906 [2024-11-26 23:14:52.815362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:13.906 [2024-11-26 23:14:52.815371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:13.906 [2024-11-26 23:14:52.815379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.906 [2024-11-26 23:14:52.815417] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 
1006.885 ms, result 0 00:30:13.906 [2024-11-26 23:14:52.815466] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:13.906 [2024-11-26 23:14:52.815477] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:13.906 [2024-11-26 23:14:52.815487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.906 [2024-11-26 23:14:52.815500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:30:13.906 [2024-11-26 23:14:52.815509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1880.228 ms 00:30:13.906 [2024-11-26 23:14:52.815517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.906 [2024-11-26 23:14:52.815546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.906 [2024-11-26 23:14:52.815555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:30:13.906 [2024-11-26 23:14:52.815563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:13.906 [2024-11-26 23:14:52.815570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.906 [2024-11-26 23:14:52.824721] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:13.906 [2024-11-26 23:14:52.824856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.906 [2024-11-26 23:14:52.824868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:13.906 [2024-11-26 23:14:52.824877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.270 ms 00:30:13.906 [2024-11-26 23:14:52.824886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.906 [2024-11-26 23:14:52.825633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.906 [2024-11-26 23:14:52.825689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:30:13.906 [2024-11-26 23:14:52.825700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.677 ms 00:30:13.906 [2024-11-26 23:14:52.825713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.906 [2024-11-26 23:14:52.827940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.906 [2024-11-26 23:14:52.827970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:30:13.906 [2024-11-26 23:14:52.827980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.209 ms 00:30:13.906 [2024-11-26 23:14:52.827988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.906 [2024-11-26 23:14:52.828040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.906 [2024-11-26 23:14:52.828050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:30:13.906 [2024-11-26 23:14:52.828059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:13.906 [2024-11-26 23:14:52.828066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.906 [2024-11-26 23:14:52.828184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.906 [2024-11-26 23:14:52.828198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:13.906 [2024-11-26 23:14:52.828207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:30:13.906 [2024-11-26 
23:14:52.828214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.906 [2024-11-26 23:14:52.828236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.906 [2024-11-26 23:14:52.828249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:13.906 [2024-11-26 23:14:52.828257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:13.906 [2024-11-26 23:14:52.828265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.906 [2024-11-26 23:14:52.828307] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:13.906 [2024-11-26 23:14:52.828319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.906 [2024-11-26 23:14:52.828332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:13.906 [2024-11-26 23:14:52.828341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:30:13.906 [2024-11-26 23:14:52.828349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.906 [2024-11-26 23:14:52.828403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.906 [2024-11-26 23:14:52.828412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:13.906 [2024-11-26 23:14:52.828420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:30:13.906 [2024-11-26 23:14:52.828428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.906 [2024-11-26 23:14:52.829521] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1973.720 ms, result 0 00:30:13.906 [2024-11-26 23:14:52.845234] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:13.906 [2024-11-26 23:14:52.861218] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:13.906 [2024-11-26 23:14:52.869462] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:13.906 Validate MD5 checksum, iteration 1 00:30:13.906 23:14:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:13.906 23:14:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:13.906 23:14:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:13.906 23:14:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:13.906 23:14:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:30:13.906 23:14:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:13.906 23:14:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:13.906 23:14:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:13.906 23:14:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:13.906 23:14:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:13.906 23:14:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:13.906 23:14:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 
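For orientation, a condensed sketch (not part of the captured output) of the loop the xtrace above is executing — the MD5 validation pass of test/ftl/upgrade_shutdown.sh, which reads the FTL bdev back over NVMe/TCP in 1 GiB windows and hashes each window. Reconstructed from the traced lines; `iterations` is set earlier in the script, and `expected_sum` is a placeholder name for the checksum recorded before shutdown, which the trace only shows at compare time:

    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # Pull 1024 x 1 MiB blocks from bdev ftln1 via spdk_dd over NVMe/TCP,
        # starting at the current block offset.
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        # Hash the window and require it to match the pre-shutdown checksum.
        sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
        [[ $sum == "$expected_sum" ]]   # the trace shows the negated form: [[ $sum != ... ]]
    done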
00:30:13.906 23:14:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:13.906 23:14:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:13.906 23:14:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:13.906 [2024-11-26 23:14:52.992193] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:30:13.906 [2024-11-26 23:14:52.992829] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96405 ] 00:30:14.164 [2024-11-26 23:14:53.132698] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:14.164 [2024-11-26 23:14:53.161626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:14.164 [2024-11-26 23:14:53.180281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:15.549  [2024-11-26T23:14:55.629Z] Copying: 560/1024 [MB] (560 MBps) [2024-11-26T23:14:59.848Z] Copying: 1024/1024 [MB] (average 549 MBps) 00:30:20.721 00:30:20.721 23:14:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:20.721 23:14:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:22.631 23:15:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:22.631 Validate MD5 checksum, iteration 2 00:30:22.631 23:15:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ccceb722ec061e90045368a68ef6b1a0 00:30:22.631 23:15:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ccceb722ec061e90045368a68ef6b1a0 != \c\c\c\e\b\7\2\2\e\c\0\6\1\e\9\0\0\4\5\3\6\8\a\6\8\e\f\6\b\1\a\0 ]] 00:30:22.631 23:15:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:22.631 23:15:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:22.631 23:15:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:22.632 23:15:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:22.632 23:15:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:22.632 23:15:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:22.632 23:15:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:22.632 23:15:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:22.632 23:15:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:22.632 
[2024-11-26 23:15:01.317734] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:30:22.632 [2024-11-26 23:15:01.317974] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96498 ] 00:30:22.632 [2024-11-26 23:15:01.450162] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:22.632 [2024-11-26 23:15:01.480275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:22.632 [2024-11-26 23:15:01.499412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:24.016  [2024-11-26T23:15:03.714Z] Copying: 542/1024 [MB] (542 MBps) [2024-11-26T23:15:06.248Z] Copying: 1024/1024 [MB] (average 558 MBps) 00:30:27.121 00:30:27.121 23:15:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:27.121 23:15:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:29.025 23:15:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:29.025 23:15:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f3d75f62294b7a70410df82fa4be1118 00:30:29.025 23:15:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f3d75f62294b7a70410df82fa4be1118 != \f\3\d\7\5\f\6\2\2\9\4\b\7\a\7\0\4\1\0\d\f\8\2\f\a\4\b\e\1\1\1\8 ]] 00:30:29.025 23:15:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:29.025 23:15:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:29.025 23:15:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:30:29.025 23:15:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:30:29.025 23:15:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:30:29.025 23:15:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:29.025 23:15:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:30:29.025 23:15:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:30:29.025 23:15:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:30:29.025 23:15:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:30:29.025 23:15:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 96370 ]] 00:30:29.025 23:15:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 96370 00:30:29.025 23:15:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 96370 ']' 00:30:29.025 23:15:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 96370 00:30:29.025 23:15:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:29.025 23:15:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:29.025 23:15:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96370 00:30:29.025 killing process with pid 96370 00:30:29.025 23:15:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:29.025 23:15:07 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:29.025 23:15:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96370' 00:30:29.025 23:15:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 96370 00:30:29.025 23:15:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 96370 00:30:29.025 [2024-11-26 23:15:08.097188] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:29.025 [2024-11-26 23:15:08.100696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.025 [2024-11-26 23:15:08.100735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:29.025 [2024-11-26 23:15:08.100748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:29.025 [2024-11-26 23:15:08.100756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.025 [2024-11-26 23:15:08.100782] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:29.025 [2024-11-26 23:15:08.101218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.025 [2024-11-26 23:15:08.101246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:29.025 [2024-11-26 23:15:08.101256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.424 ms 00:30:29.025 [2024-11-26 23:15:08.101263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.025 [2024-11-26 23:15:08.101524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.025 [2024-11-26 23:15:08.101543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:29.025 [2024-11-26 23:15:08.101553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.240 ms 00:30:29.025 [2024-11-26 23:15:08.101562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.025 [2024-11-26 23:15:08.103271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.025 [2024-11-26 23:15:08.103464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:29.025 [2024-11-26 23:15:08.103484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.692 ms 00:30:29.025 [2024-11-26 23:15:08.103493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.025 [2024-11-26 23:15:08.104937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.025 [2024-11-26 23:15:08.104975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:29.025 [2024-11-26 23:15:08.104987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.149 ms 00:30:29.025 [2024-11-26 23:15:08.104995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.025 [2024-11-26 23:15:08.107678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.025 [2024-11-26 23:15:08.107721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:29.025 [2024-11-26 23:15:08.107731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.637 ms 00:30:29.025 [2024-11-26 23:15:08.107739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.025 [2024-11-26 23:15:08.109370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.025 [2024-11-26 23:15:08.109417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:30:29.025 [2024-11-26 23:15:08.109427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.597 ms 00:30:29.025 [2024-11-26 23:15:08.109434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.025 [2024-11-26 23:15:08.109526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.025 [2024-11-26 23:15:08.109537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:29.025 [2024-11-26 23:15:08.109550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:30:29.025 [2024-11-26 23:15:08.109557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.025 [2024-11-26 23:15:08.111817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.025 [2024-11-26 23:15:08.111849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:29.025 [2024-11-26 23:15:08.111858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.243 ms 00:30:29.025 [2024-11-26 23:15:08.111865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.025 [2024-11-26 23:15:08.114103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.025 [2024-11-26 23:15:08.114134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:29.025 [2024-11-26 23:15:08.114143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.206 ms 00:30:29.025 [2024-11-26 23:15:08.114150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.025 [2024-11-26 23:15:08.116422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.025 [2024-11-26 23:15:08.116454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:29.025 [2024-11-26 23:15:08.116464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.239 ms 00:30:29.025 [2024-11-26 23:15:08.116471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.025 [2024-11-26 23:15:08.118419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.025 [2024-11-26 23:15:08.118451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:29.025 [2024-11-26 23:15:08.118460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.888 ms 00:30:29.025 [2024-11-26 23:15:08.118467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.025 [2024-11-26 23:15:08.118499] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:29.025 [2024-11-26 23:15:08.118513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:29.025 [2024-11-26 23:15:08.118523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:29.025 [2024-11-26 23:15:08.118531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:29.025 [2024-11-26 23:15:08.118539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:29.025 [2024-11-26 23:15:08.118547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:29.025 [2024-11-26 23:15:08.118555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:29.025 [2024-11-26 23:15:08.118563] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:29.025 [2024-11-26 23:15:08.118570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:29.025 [2024-11-26 23:15:08.118578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:29.025 [2024-11-26 23:15:08.118586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:29.025 [2024-11-26 23:15:08.118593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:29.025 [2024-11-26 23:15:08.118600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:29.025 [2024-11-26 23:15:08.118608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:29.025 [2024-11-26 23:15:08.118615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:29.025 [2024-11-26 23:15:08.118623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:29.025 [2024-11-26 23:15:08.118631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:29.025 [2024-11-26 23:15:08.118638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:29.025 [2024-11-26 23:15:08.118646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:29.025 [2024-11-26 23:15:08.118654] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:29.025 [2024-11-26 23:15:08.118661] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: b8d864d2-5103-4c9b-aef1-1f6d395a70b5 00:30:29.025 [2024-11-26 23:15:08.118669] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:29.025 [2024-11-26 23:15:08.118676] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:30:29.026 [2024-11-26 23:15:08.118683] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:30:29.026 [2024-11-26 23:15:08.118691] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:30:29.026 [2024-11-26 23:15:08.118702] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:29.026 [2024-11-26 23:15:08.118711] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:29.026 [2024-11-26 23:15:08.118719] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:29.026 [2024-11-26 23:15:08.118726] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:29.026 [2024-11-26 23:15:08.118733] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:29.026 [2024-11-26 23:15:08.118741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.026 [2024-11-26 23:15:08.118750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:29.026 [2024-11-26 23:15:08.118759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.243 ms 00:30:29.026 [2024-11-26 23:15:08.118766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.026 [2024-11-26 23:15:08.120355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.026 [2024-11-26 23:15:08.120380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:30:29.026 [2024-11-26 23:15:08.120395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.571 ms 00:30:29.026 [2024-11-26 23:15:08.120402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.026 [2024-11-26 23:15:08.120483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.026 [2024-11-26 23:15:08.120491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:29.026 [2024-11-26 23:15:08.120499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 00:30:29.026 [2024-11-26 23:15:08.120506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.026 [2024-11-26 23:15:08.126046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:29.026 [2024-11-26 23:15:08.126217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:29.026 [2024-11-26 23:15:08.126239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:29.026 [2024-11-26 23:15:08.126247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.026 [2024-11-26 23:15:08.126276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:29.026 [2024-11-26 23:15:08.126285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:29.026 [2024-11-26 23:15:08.126311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:29.026 [2024-11-26 23:15:08.126324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.026 [2024-11-26 23:15:08.126378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:29.026 [2024-11-26 23:15:08.126389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:29.026 [2024-11-26 23:15:08.126397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:29.026 [2024-11-26 23:15:08.126407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.026 [2024-11-26 23:15:08.126425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:29.026 [2024-11-26 23:15:08.126438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:29.026 [2024-11-26 23:15:08.126446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:29.026 [2024-11-26 23:15:08.126453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.026 [2024-11-26 23:15:08.136338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:29.026 [2024-11-26 23:15:08.136373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:29.026 [2024-11-26 23:15:08.136391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:29.026 [2024-11-26 23:15:08.136398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.026 [2024-11-26 23:15:08.143811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:29.026 [2024-11-26 23:15:08.144000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:29.026 [2024-11-26 23:15:08.144014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:29.026 [2024-11-26 23:15:08.144022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.026 [2024-11-26 23:15:08.144093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:29.026 [2024-11-26 23:15:08.144102] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:29.026 [2024-11-26 23:15:08.144114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:29.026 [2024-11-26 23:15:08.144121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.026 [2024-11-26 23:15:08.144155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:29.026 [2024-11-26 23:15:08.144169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:29.026 [2024-11-26 23:15:08.144176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:29.026 [2024-11-26 23:15:08.144184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.026 [2024-11-26 23:15:08.144251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:29.026 [2024-11-26 23:15:08.144260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:29.026 [2024-11-26 23:15:08.144268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:29.026 [2024-11-26 23:15:08.144275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.026 [2024-11-26 23:15:08.144490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:29.026 [2024-11-26 23:15:08.144524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:29.026 [2024-11-26 23:15:08.144544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:29.026 [2024-11-26 23:15:08.144564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.026 [2024-11-26 23:15:08.144614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:29.026 [2024-11-26 23:15:08.144638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:29.026 [2024-11-26 23:15:08.144657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:29.026 [2024-11-26 23:15:08.144682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.026 [2024-11-26 23:15:08.144741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:29.026 [2024-11-26 23:15:08.144882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:29.026 [2024-11-26 23:15:08.144899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:29.026 [2024-11-26 23:15:08.144908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.026 [2024-11-26 23:15:08.145049] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 44.319 ms, result 0 00:30:29.286 23:15:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:29.286 23:15:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:29.286 23:15:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:30:29.286 23:15:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:30:29.286 23:15:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:30:29.286 23:15:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:29.286 Remove shared memory files 00:30:29.286 23:15:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:30:29.286 23:15:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove 
shared memory files 00:30:29.286 23:15:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:29.286 23:15:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:29.286 23:15:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid96184 00:30:29.286 23:15:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:29.286 23:15:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:29.286 ************************************ 00:30:29.286 END TEST ftl_upgrade_shutdown 00:30:29.286 ************************************ 00:30:29.286 00:30:29.286 real 1m16.683s 00:30:29.286 user 1m41.264s 00:30:29.286 sys 0m19.783s 00:30:29.286 23:15:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:29.286 23:15:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:29.286 23:15:08 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:30:29.286 23:15:08 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:30:29.286 23:15:08 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:30:29.286 23:15:08 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:29.286 23:15:08 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:29.286 ************************************ 00:30:29.286 START TEST ftl_restore_fast 00:30:29.286 ************************************ 00:30:29.286 23:15:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:30:29.547 * Looking for test storage... 00:30:29.547 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:29.547 23:15:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:29.547 23:15:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:30:29.547 23:15:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:29.547 23:15:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:29.547 23:15:08 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:29.547 23:15:08 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:29.547 23:15:08 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:29.547 23:15:08 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:30:29.547 23:15:08 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:30:29.547 23:15:08 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:30:29.547 23:15:08 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:30:29.547 23:15:08 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:30:29.547 23:15:08 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:30:29.547 23:15:08 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:30:29.547 23:15:08 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:29.547 23:15:08 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:30:29.547 23:15:08 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:30:29.547 23:15:08 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:29.547 23:15:08 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:29.547 23:15:08 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:30:29.547 23:15:08 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:30:29.547 23:15:08 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:29.547 23:15:08 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:30:29.547 23:15:08 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:30:29.547 23:15:08 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:30:29.547 23:15:08 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:30:29.547 23:15:08 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:29.547 23:15:08 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:30:29.547 23:15:08 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:30:29.547 23:15:08 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:29.547 23:15:08 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:29.547 23:15:08 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:29.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.548 --rc genhtml_branch_coverage=1 00:30:29.548 --rc genhtml_function_coverage=1 00:30:29.548 --rc genhtml_legend=1 00:30:29.548 --rc geninfo_all_blocks=1 00:30:29.548 --rc geninfo_unexecuted_blocks=1 00:30:29.548 00:30:29.548 ' 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:29.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.548 --rc genhtml_branch_coverage=1 00:30:29.548 --rc genhtml_function_coverage=1 00:30:29.548 --rc genhtml_legend=1 00:30:29.548 --rc geninfo_all_blocks=1 00:30:29.548 --rc geninfo_unexecuted_blocks=1 00:30:29.548 00:30:29.548 ' 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:29.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.548 --rc genhtml_branch_coverage=1 00:30:29.548 --rc genhtml_function_coverage=1 00:30:29.548 --rc genhtml_legend=1 00:30:29.548 --rc geninfo_all_blocks=1 00:30:29.548 --rc geninfo_unexecuted_blocks=1 00:30:29.548 00:30:29.548 ' 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:29.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.548 --rc genhtml_branch_coverage=1 00:30:29.548 --rc genhtml_function_coverage=1 00:30:29.548 --rc genhtml_legend=1 00:30:29.548 --rc geninfo_all_blocks=1 00:30:29.548 --rc geninfo_unexecuted_blocks=1 00:30:29.548 00:30:29.548 ' 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
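For orientation, a condensed sketch (not part of the captured output) of the cmp_versions helper the xtrace above steps through — the harness checking that lcov is at least version 1.15 before enabling coverage options. The `IFS=.-:` split and the field-by-field numeric compare are visible in the trace; the `decimal` helper and the operator handling are simplified here (the real scripts/common.sh tracks lt/gt/eq flags):

    decimal() { [[ $1 =~ ^[0-9]+$ ]] && echo "$1" || echo 0; }   # simplified

    cmp_versions() {    # usage: cmp_versions 1.15 '<' 2
        local ver1 ver2 ver1_l ver2_l op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        # Compare field by field across the longer of the two version arrays,
        # treating missing fields as 0.
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            ver1[v]=$(decimal "${ver1[v]:-0}")
            ver2[v]=$(decimal "${ver2[v]:-0}")
            ((ver1[v] > ver2[v])) && { [[ $op == '>' ]]; return; }
            ((ver1[v] < ver2[v])) && { [[ $op == '<' ]]; return; }
        done
        [[ $op =~ = ]]   # all fields equal
    }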
00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.mi99f2Gtfy 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:30:29.548 23:15:08 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=96649 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 96649 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 96649 ']' 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:29.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:29.548 23:15:08 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:30:29.809 [2024-11-26 23:15:08.674229] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:30:29.809 [2024-11-26 23:15:08.674615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96649 ] 00:30:29.809 [2024-11-26 23:15:08.816004] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
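For orientation, a condensed sketch (not part of the captured output) of the option parsing the getopts xtrace above shows: restore.sh is invoked as `restore.sh -f -c 0000:00:10.0 0000:00:11.0`, where `-f` selects the fast-shutdown variant and `-c` names the NV-cache device, leaving the base device as the positional argument. Reconstructed from the traced lines; the `-u` branch and the OPTIND-based shift are assumptions, since this run only shows the expanded values:

    while getopts ":u:c:f" opt; do
        case $opt in
            c) nv_cache=$OPTARG ;;      # 0000:00:10.0, PCIe addr of the NV-cache bdev
            f) fast_shutdown=1 ;;       # later appends --fast-shutdown to bdev_ftl_create
            u) uuid=$OPTARG ;;          # assumed: restore an existing FTL instance by UUID
        esac
    done
    shift $((OPTIND - 1))               # traced here with its expanded value: shift 3
    device=$1                           # 0000:00:11.0, base device for the FTL bdev
    timeout=240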
00:30:29.809 [2024-11-26 23:15:08.844908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.809 [2024-11-26 23:15:08.865098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.379 23:15:09 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:30.379 23:15:09 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:30:30.379 23:15:09 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:30:30.379 23:15:09 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:30:30.379 23:15:09 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:30.379 23:15:09 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:30:30.379 23:15:09 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:30:30.379 23:15:09 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:30.951 23:15:09 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:30:30.951 23:15:09 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:30:30.951 23:15:09 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:30:30.951 23:15:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:30:30.951 23:15:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:30.951 23:15:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:30.951 23:15:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:30.951 23:15:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:30:30.951 23:15:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:30.951 { 00:30:30.951 "name": "nvme0n1", 00:30:30.951 "aliases": [ 00:30:30.951 "fd1e32bd-6f69-47dc-aa82-075d569533cc" 00:30:30.951 ], 00:30:30.951 "product_name": "NVMe disk", 00:30:30.951 "block_size": 4096, 00:30:30.951 "num_blocks": 1310720, 00:30:30.951 "uuid": "fd1e32bd-6f69-47dc-aa82-075d569533cc", 00:30:30.951 "numa_id": -1, 00:30:30.951 "assigned_rate_limits": { 00:30:30.951 "rw_ios_per_sec": 0, 00:30:30.951 "rw_mbytes_per_sec": 0, 00:30:30.951 "r_mbytes_per_sec": 0, 00:30:30.951 "w_mbytes_per_sec": 0 00:30:30.951 }, 00:30:30.951 "claimed": true, 00:30:30.951 "claim_type": "read_many_write_one", 00:30:30.951 "zoned": false, 00:30:30.951 "supported_io_types": { 00:30:30.951 "read": true, 00:30:30.951 "write": true, 00:30:30.951 "unmap": true, 00:30:30.951 "flush": true, 00:30:30.951 "reset": true, 00:30:30.951 "nvme_admin": true, 00:30:30.951 "nvme_io": true, 00:30:30.951 "nvme_io_md": false, 00:30:30.951 "write_zeroes": true, 00:30:30.951 "zcopy": false, 00:30:30.951 "get_zone_info": false, 00:30:30.951 "zone_management": false, 00:30:30.951 "zone_append": false, 00:30:30.951 "compare": true, 00:30:30.951 "compare_and_write": false, 00:30:30.951 "abort": true, 00:30:30.951 "seek_hole": false, 00:30:30.951 "seek_data": false, 00:30:30.951 "copy": true, 00:30:30.951 "nvme_iov_md": false 00:30:30.951 }, 00:30:30.951 "driver_specific": { 00:30:30.951 "nvme": [ 00:30:30.951 { 00:30:30.951 "pci_address": "0000:00:11.0", 00:30:30.951 "trid": { 00:30:30.951 "trtype": "PCIe", 00:30:30.951 "traddr": "0000:00:11.0" 00:30:30.951 }, 00:30:30.951 "ctrlr_data": { 00:30:30.951 "cntlid": 0, 00:30:30.951 
"vendor_id": "0x1b36", 00:30:30.951 "model_number": "QEMU NVMe Ctrl", 00:30:30.951 "serial_number": "12341", 00:30:30.951 "firmware_revision": "8.0.0", 00:30:30.951 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:30.951 "oacs": { 00:30:30.951 "security": 0, 00:30:30.951 "format": 1, 00:30:30.951 "firmware": 0, 00:30:30.951 "ns_manage": 1 00:30:30.951 }, 00:30:30.951 "multi_ctrlr": false, 00:30:30.951 "ana_reporting": false 00:30:30.951 }, 00:30:30.951 "vs": { 00:30:30.951 "nvme_version": "1.4" 00:30:30.951 }, 00:30:30.951 "ns_data": { 00:30:30.951 "id": 1, 00:30:30.951 "can_share": false 00:30:30.951 } 00:30:30.951 } 00:30:30.951 ], 00:30:30.951 "mp_policy": "active_passive" 00:30:30.951 } 00:30:30.951 } 00:30:30.951 ]' 00:30:30.951 23:15:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:30.951 23:15:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:30.951 23:15:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:30.951 23:15:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:30.951 23:15:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:30.951 23:15:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:30:30.951 23:15:09 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:30:30.951 23:15:09 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:30:30.951 23:15:09 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:30:30.951 23:15:09 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:30.951 23:15:09 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:31.213 23:15:10 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=c5638d42-c8f0-478a-8a50-6e040843948c 00:30:31.213 23:15:10 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:30:31.213 23:15:10 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c5638d42-c8f0-478a-8a50-6e040843948c 00:30:31.477 23:15:10 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:30:31.739 23:15:10 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=243a5218-39d1-4fc1-9890-ba9c000ee999 00:30:31.739 23:15:10 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 243a5218-39d1-4fc1-9890-ba9c000ee999 00:30:31.739 23:15:10 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=6b45615a-7d03-43e4-8802-d4dad78974b0 00:30:31.739 23:15:10 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:30:31.739 23:15:10 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6b45615a-7d03-43e4-8802-d4dad78974b0 00:30:31.739 23:15:10 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:30:31.739 23:15:10 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:31.739 23:15:10 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=6b45615a-7d03-43e4-8802-d4dad78974b0 00:30:31.739 23:15:10 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:30:31.739 23:15:10 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 6b45615a-7d03-43e4-8802-d4dad78974b0 00:30:31.739 23:15:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local 
bdev_name=6b45615a-7d03-43e4-8802-d4dad78974b0 00:30:31.739 23:15:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:31.739 23:15:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:31.739 23:15:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:31.739 23:15:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6b45615a-7d03-43e4-8802-d4dad78974b0 00:30:31.999 23:15:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:31.999 { 00:30:31.999 "name": "6b45615a-7d03-43e4-8802-d4dad78974b0", 00:30:31.999 "aliases": [ 00:30:31.999 "lvs/nvme0n1p0" 00:30:31.999 ], 00:30:31.999 "product_name": "Logical Volume", 00:30:31.999 "block_size": 4096, 00:30:31.999 "num_blocks": 26476544, 00:30:31.999 "uuid": "6b45615a-7d03-43e4-8802-d4dad78974b0", 00:30:31.999 "assigned_rate_limits": { 00:30:31.999 "rw_ios_per_sec": 0, 00:30:31.999 "rw_mbytes_per_sec": 0, 00:30:31.999 "r_mbytes_per_sec": 0, 00:30:31.999 "w_mbytes_per_sec": 0 00:30:31.999 }, 00:30:31.999 "claimed": false, 00:30:31.999 "zoned": false, 00:30:31.999 "supported_io_types": { 00:30:31.999 "read": true, 00:30:31.999 "write": true, 00:30:31.999 "unmap": true, 00:30:31.999 "flush": false, 00:30:31.999 "reset": true, 00:30:31.999 "nvme_admin": false, 00:30:31.999 "nvme_io": false, 00:30:31.999 "nvme_io_md": false, 00:30:31.999 "write_zeroes": true, 00:30:31.999 "zcopy": false, 00:30:31.999 "get_zone_info": false, 00:30:31.999 "zone_management": false, 00:30:31.999 "zone_append": false, 00:30:31.999 "compare": false, 00:30:31.999 "compare_and_write": false, 00:30:31.999 "abort": false, 00:30:31.999 "seek_hole": true, 00:30:31.999 "seek_data": true, 00:30:31.999 "copy": false, 00:30:31.999 "nvme_iov_md": false 00:30:31.999 }, 00:30:31.999 "driver_specific": { 00:30:31.999 "lvol": { 00:30:31.999 "lvol_store_uuid": "243a5218-39d1-4fc1-9890-ba9c000ee999", 00:30:31.999 "base_bdev": "nvme0n1", 00:30:31.999 "thin_provision": true, 00:30:31.999 "num_allocated_clusters": 0, 00:30:31.999 "snapshot": false, 00:30:31.999 "clone": false, 00:30:31.999 "esnap_clone": false 00:30:31.999 } 00:30:31.999 } 00:30:31.999 } 00:30:31.999 ]' 00:30:31.999 23:15:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:31.999 23:15:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:31.999 23:15:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:31.999 23:15:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:31.999 23:15:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:31.999 23:15:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:31.999 23:15:11 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:30:31.999 23:15:11 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:30:31.999 23:15:11 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:30:32.260 23:15:11 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:30:32.260 23:15:11 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:30:32.260 23:15:11 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 6b45615a-7d03-43e4-8802-d4dad78974b0 00:30:32.260 23:15:11 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1382 -- # local bdev_name=6b45615a-7d03-43e4-8802-d4dad78974b0 00:30:32.260 23:15:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:32.260 23:15:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:32.260 23:15:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:32.260 23:15:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6b45615a-7d03-43e4-8802-d4dad78974b0 00:30:32.520 23:15:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:32.520 { 00:30:32.520 "name": "6b45615a-7d03-43e4-8802-d4dad78974b0", 00:30:32.520 "aliases": [ 00:30:32.520 "lvs/nvme0n1p0" 00:30:32.520 ], 00:30:32.520 "product_name": "Logical Volume", 00:30:32.520 "block_size": 4096, 00:30:32.520 "num_blocks": 26476544, 00:30:32.520 "uuid": "6b45615a-7d03-43e4-8802-d4dad78974b0", 00:30:32.520 "assigned_rate_limits": { 00:30:32.520 "rw_ios_per_sec": 0, 00:30:32.520 "rw_mbytes_per_sec": 0, 00:30:32.520 "r_mbytes_per_sec": 0, 00:30:32.520 "w_mbytes_per_sec": 0 00:30:32.520 }, 00:30:32.520 "claimed": false, 00:30:32.520 "zoned": false, 00:30:32.521 "supported_io_types": { 00:30:32.521 "read": true, 00:30:32.521 "write": true, 00:30:32.521 "unmap": true, 00:30:32.521 "flush": false, 00:30:32.521 "reset": true, 00:30:32.521 "nvme_admin": false, 00:30:32.521 "nvme_io": false, 00:30:32.521 "nvme_io_md": false, 00:30:32.521 "write_zeroes": true, 00:30:32.521 "zcopy": false, 00:30:32.521 "get_zone_info": false, 00:30:32.521 "zone_management": false, 00:30:32.521 "zone_append": false, 00:30:32.521 "compare": false, 00:30:32.521 "compare_and_write": false, 00:30:32.521 "abort": false, 00:30:32.521 "seek_hole": true, 00:30:32.521 "seek_data": true, 00:30:32.521 "copy": false, 00:30:32.521 "nvme_iov_md": false 00:30:32.521 }, 00:30:32.521 "driver_specific": { 00:30:32.521 "lvol": { 00:30:32.521 "lvol_store_uuid": "243a5218-39d1-4fc1-9890-ba9c000ee999", 00:30:32.521 "base_bdev": "nvme0n1", 00:30:32.521 "thin_provision": true, 00:30:32.521 "num_allocated_clusters": 0, 00:30:32.521 "snapshot": false, 00:30:32.521 "clone": false, 00:30:32.521 "esnap_clone": false 00:30:32.521 } 00:30:32.521 } 00:30:32.521 } 00:30:32.521 ]' 00:30:32.521 23:15:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:32.521 23:15:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:32.521 23:15:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:32.521 23:15:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:32.521 23:15:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:32.521 23:15:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:32.521 23:15:11 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:30:32.521 23:15:11 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:30:32.782 23:15:11 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:30:32.782 23:15:11 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 6b45615a-7d03-43e4-8802-d4dad78974b0 00:30:32.782 23:15:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=6b45615a-7d03-43e4-8802-d4dad78974b0 00:30:32.782 23:15:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 
-- # local bdev_info 00:30:32.782 23:15:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:32.782 23:15:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:32.782 23:15:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6b45615a-7d03-43e4-8802-d4dad78974b0 00:30:33.044 23:15:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:33.044 { 00:30:33.044 "name": "6b45615a-7d03-43e4-8802-d4dad78974b0", 00:30:33.044 "aliases": [ 00:30:33.044 "lvs/nvme0n1p0" 00:30:33.044 ], 00:30:33.044 "product_name": "Logical Volume", 00:30:33.044 "block_size": 4096, 00:30:33.044 "num_blocks": 26476544, 00:30:33.044 "uuid": "6b45615a-7d03-43e4-8802-d4dad78974b0", 00:30:33.044 "assigned_rate_limits": { 00:30:33.044 "rw_ios_per_sec": 0, 00:30:33.044 "rw_mbytes_per_sec": 0, 00:30:33.044 "r_mbytes_per_sec": 0, 00:30:33.044 "w_mbytes_per_sec": 0 00:30:33.044 }, 00:30:33.044 "claimed": false, 00:30:33.044 "zoned": false, 00:30:33.044 "supported_io_types": { 00:30:33.044 "read": true, 00:30:33.044 "write": true, 00:30:33.044 "unmap": true, 00:30:33.044 "flush": false, 00:30:33.044 "reset": true, 00:30:33.044 "nvme_admin": false, 00:30:33.044 "nvme_io": false, 00:30:33.044 "nvme_io_md": false, 00:30:33.044 "write_zeroes": true, 00:30:33.044 "zcopy": false, 00:30:33.044 "get_zone_info": false, 00:30:33.044 "zone_management": false, 00:30:33.044 "zone_append": false, 00:30:33.044 "compare": false, 00:30:33.044 "compare_and_write": false, 00:30:33.044 "abort": false, 00:30:33.044 "seek_hole": true, 00:30:33.044 "seek_data": true, 00:30:33.044 "copy": false, 00:30:33.044 "nvme_iov_md": false 00:30:33.044 }, 00:30:33.044 "driver_specific": { 00:30:33.044 "lvol": { 00:30:33.044 "lvol_store_uuid": "243a5218-39d1-4fc1-9890-ba9c000ee999", 00:30:33.044 "base_bdev": "nvme0n1", 00:30:33.044 "thin_provision": true, 00:30:33.044 "num_allocated_clusters": 0, 00:30:33.044 "snapshot": false, 00:30:33.044 "clone": false, 00:30:33.044 "esnap_clone": false 00:30:33.044 } 00:30:33.044 } 00:30:33.044 } 00:30:33.044 ]' 00:30:33.044 23:15:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:33.044 23:15:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:33.044 23:15:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:33.044 23:15:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:33.044 23:15:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:33.044 23:15:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:33.044 23:15:12 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:30:33.044 23:15:12 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 6b45615a-7d03-43e4-8802-d4dad78974b0 --l2p_dram_limit 10' 00:30:33.044 23:15:12 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:30:33.044 23:15:12 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:30:33.044 23:15:12 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:30:33.044 23:15:12 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:30:33.044 23:15:12 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:30:33.044 23:15:12 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6b45615a-7d03-43e4-8802-d4dad78974b0 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:30:33.306 [2024-11-26 23:15:12.247406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.306 [2024-11-26 23:15:12.247448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:33.306 [2024-11-26 23:15:12.247460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:33.306 [2024-11-26 23:15:12.247467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.306 [2024-11-26 23:15:12.247514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.306 [2024-11-26 23:15:12.247524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:33.306 [2024-11-26 23:15:12.247537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:30:33.306 [2024-11-26 23:15:12.247543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.306 [2024-11-26 23:15:12.247562] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:33.306 [2024-11-26 23:15:12.247772] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:33.306 [2024-11-26 23:15:12.247785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.306 [2024-11-26 23:15:12.247792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:33.306 [2024-11-26 23:15:12.247817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:30:33.306 [2024-11-26 23:15:12.247823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.306 [2024-11-26 23:15:12.247847] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7086a201-ebb6-43da-bca8-1ef81f58afda 00:30:33.306 [2024-11-26 23:15:12.248848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.306 [2024-11-26 23:15:12.248873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:30:33.306 [2024-11-26 23:15:12.248884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:30:33.306 [2024-11-26 23:15:12.248894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.306 [2024-11-26 23:15:12.253622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.306 [2024-11-26 23:15:12.253650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:33.306 [2024-11-26 23:15:12.253658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.671 ms 00:30:33.306 [2024-11-26 23:15:12.253671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.306 [2024-11-26 23:15:12.253737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.306 [2024-11-26 23:15:12.253746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:33.306 [2024-11-26 23:15:12.253752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:30:33.306 [2024-11-26 23:15:12.253760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.306 [2024-11-26 23:15:12.253810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.306 [2024-11-26 23:15:12.253823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO 
device 00:30:33.306 [2024-11-26 23:15:12.253829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:33.306 [2024-11-26 23:15:12.253836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.306 [2024-11-26 23:15:12.253855] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:33.306 [2024-11-26 23:15:12.255112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.306 [2024-11-26 23:15:12.255136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:33.306 [2024-11-26 23:15:12.255146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.260 ms 00:30:33.306 [2024-11-26 23:15:12.255153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.306 [2024-11-26 23:15:12.255182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.306 [2024-11-26 23:15:12.255190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:33.306 [2024-11-26 23:15:12.255200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:33.306 [2024-11-26 23:15:12.255206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.306 [2024-11-26 23:15:12.255229] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:30:33.306 [2024-11-26 23:15:12.255346] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:33.306 [2024-11-26 23:15:12.255359] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:33.306 [2024-11-26 23:15:12.255370] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:33.306 [2024-11-26 23:15:12.255384] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:33.306 [2024-11-26 23:15:12.255393] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:33.306 [2024-11-26 23:15:12.255404] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:33.306 [2024-11-26 23:15:12.255412] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:33.306 [2024-11-26 23:15:12.255420] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:33.306 [2024-11-26 23:15:12.255426] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:33.306 [2024-11-26 23:15:12.255434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.306 [2024-11-26 23:15:12.255440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:33.306 [2024-11-26 23:15:12.255448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:30:33.306 [2024-11-26 23:15:12.255453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.306 [2024-11-26 23:15:12.255519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.306 [2024-11-26 23:15:12.255527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:33.306 [2024-11-26 23:15:12.255536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:30:33.306 [2024-11-26 23:15:12.255541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.306 
[2024-11-26 23:15:12.255612] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:33.306 [2024-11-26 23:15:12.255620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:33.306 [2024-11-26 23:15:12.255627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:33.306 [2024-11-26 23:15:12.255633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:33.306 [2024-11-26 23:15:12.255640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:33.306 [2024-11-26 23:15:12.255646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:33.306 [2024-11-26 23:15:12.255653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:33.306 [2024-11-26 23:15:12.255658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:33.306 [2024-11-26 23:15:12.255664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:33.306 [2024-11-26 23:15:12.255669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:33.306 [2024-11-26 23:15:12.255675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:33.306 [2024-11-26 23:15:12.255681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:33.306 [2024-11-26 23:15:12.255690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:33.306 [2024-11-26 23:15:12.255695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:33.306 [2024-11-26 23:15:12.255703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:33.306 [2024-11-26 23:15:12.255708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:33.306 [2024-11-26 23:15:12.255714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:33.306 [2024-11-26 23:15:12.255719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:33.306 [2024-11-26 23:15:12.255725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:33.306 [2024-11-26 23:15:12.255730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:33.306 [2024-11-26 23:15:12.255736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:33.306 [2024-11-26 23:15:12.255741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:33.306 [2024-11-26 23:15:12.255747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:33.306 [2024-11-26 23:15:12.255752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:33.306 [2024-11-26 23:15:12.255757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:33.306 [2024-11-26 23:15:12.255762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:33.306 [2024-11-26 23:15:12.255769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:33.306 [2024-11-26 23:15:12.255773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:33.306 [2024-11-26 23:15:12.255781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:33.306 [2024-11-26 23:15:12.255787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:33.306 [2024-11-26 23:15:12.255793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:33.306 [2024-11-26 23:15:12.255800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md 00:30:33.306 [2024-11-26 23:15:12.255806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:33.306 [2024-11-26 23:15:12.255811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:33.306 [2024-11-26 23:15:12.255817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:33.306 [2024-11-26 23:15:12.255822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:33.307 [2024-11-26 23:15:12.255828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:33.307 [2024-11-26 23:15:12.255832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:33.307 [2024-11-26 23:15:12.255840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:33.307 [2024-11-26 23:15:12.255844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:33.307 [2024-11-26 23:15:12.255850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:33.307 [2024-11-26 23:15:12.255855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:33.307 [2024-11-26 23:15:12.255861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:33.307 [2024-11-26 23:15:12.255868] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:33.307 [2024-11-26 23:15:12.255877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:33.307 [2024-11-26 23:15:12.255882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:33.307 [2024-11-26 23:15:12.255892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:33.307 [2024-11-26 23:15:12.255898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:33.307 [2024-11-26 23:15:12.255905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:33.307 [2024-11-26 23:15:12.255909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:33.307 [2024-11-26 23:15:12.255916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:33.307 [2024-11-26 23:15:12.255920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:33.307 [2024-11-26 23:15:12.255926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:33.307 [2024-11-26 23:15:12.255934] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:33.307 [2024-11-26 23:15:12.255942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:33.307 [2024-11-26 23:15:12.255949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:33.307 [2024-11-26 23:15:12.255956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:33.307 [2024-11-26 23:15:12.255962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:33.307 [2024-11-26 23:15:12.255968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:33.307 [2024-11-26 23:15:12.255974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 
blk_offs:0x5920 blk_sz:0x800 00:30:33.307 [2024-11-26 23:15:12.255982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:33.307 [2024-11-26 23:15:12.255987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:33.307 [2024-11-26 23:15:12.255994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:33.307 [2024-11-26 23:15:12.255999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:33.307 [2024-11-26 23:15:12.256006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:33.307 [2024-11-26 23:15:12.256012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:33.307 [2024-11-26 23:15:12.256018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:33.307 [2024-11-26 23:15:12.256023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:33.307 [2024-11-26 23:15:12.256030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:33.307 [2024-11-26 23:15:12.256036] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:33.307 [2024-11-26 23:15:12.256043] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:33.307 [2024-11-26 23:15:12.256051] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:33.307 [2024-11-26 23:15:12.256058] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:33.307 [2024-11-26 23:15:12.256065] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:33.307 [2024-11-26 23:15:12.256072] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:33.307 [2024-11-26 23:15:12.256078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.307 [2024-11-26 23:15:12.256086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:33.307 [2024-11-26 23:15:12.256091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:30:33.307 [2024-11-26 23:15:12.256100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.307 [2024-11-26 23:15:12.256145] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
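The layout dump above is internally consistent with the sizes the script computed earlier: the base device is block_size 4096 x num_blocks 26476544 = 103424 MiB, and a full L2P would be 20971520 entries x 4-byte addresses = 80 MiB (the "Region l2p ... blocks: 80.00 MiB" line), which --l2p_dram_limit 10 caps to a roughly 10 MiB resident window. A minimal shell sketch, not part of the test run, that reproduces those numbers:

  bs=4096; nb=26476544
  echo $(( bs * nb / 1024 / 1024 ))        # 103424 MiB base device size
  echo $(( 20971520 * 4 / 1024 / 1024 ))   # 80 MiB full L2P table

The "l2p maximum resident size is: 9 (of 10) MiB" line further down confirms the DRAM limit taking effect.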
00:30:33.307 [2024-11-26 23:15:12.256156] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:30:37.510 [2024-11-26 23:15:16.087911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.510 [2024-11-26 23:15:16.088080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:30:37.510 [2024-11-26 23:15:16.088133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3831.753 ms 00:30:37.510 [2024-11-26 23:15:16.088154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.510 [2024-11-26 23:15:16.095570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.510 [2024-11-26 23:15:16.095704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:37.510 [2024-11-26 23:15:16.095754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.343 ms 00:30:37.510 [2024-11-26 23:15:16.095776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.510 [2024-11-26 23:15:16.095869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.510 [2024-11-26 23:15:16.095891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:37.510 [2024-11-26 23:15:16.095907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:30:37.510 [2024-11-26 23:15:16.095923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.510 [2024-11-26 23:15:16.103380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.510 [2024-11-26 23:15:16.103497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:37.510 [2024-11-26 23:15:16.103545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.370 ms 00:30:37.510 [2024-11-26 23:15:16.103566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.510 [2024-11-26 23:15:16.103595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.510 [2024-11-26 23:15:16.103612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:37.510 [2024-11-26 23:15:16.103627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:37.510 [2024-11-26 23:15:16.103643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.510 [2024-11-26 23:15:16.103937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.510 [2024-11-26 23:15:16.103973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:37.510 [2024-11-26 23:15:16.103992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:30:37.510 [2024-11-26 23:15:16.104010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.510 [2024-11-26 23:15:16.104099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.510 [2024-11-26 23:15:16.104124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:37.510 [2024-11-26 23:15:16.104141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:30:37.510 [2024-11-26 23:15:16.104156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.510 [2024-11-26 23:15:16.109023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.510 [2024-11-26 23:15:16.109054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:37.510 [2024-11-26 
23:15:16.109062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.797 ms 00:30:37.510 [2024-11-26 23:15:16.109074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.510 [2024-11-26 23:15:16.128524] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:37.510 [2024-11-26 23:15:16.131321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.510 [2024-11-26 23:15:16.131352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:37.510 [2024-11-26 23:15:16.131366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.185 ms 00:30:37.510 [2024-11-26 23:15:16.131375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.510 [2024-11-26 23:15:16.200282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.510 [2024-11-26 23:15:16.200325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:30:37.510 [2024-11-26 23:15:16.200340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.863 ms 00:30:37.510 [2024-11-26 23:15:16.200346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.510 [2024-11-26 23:15:16.200481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.510 [2024-11-26 23:15:16.200489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:37.510 [2024-11-26 23:15:16.200498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:30:37.510 [2024-11-26 23:15:16.200504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.510 [2024-11-26 23:15:16.203870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.510 [2024-11-26 23:15:16.203899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:30:37.510 [2024-11-26 23:15:16.203909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.340 ms 00:30:37.510 [2024-11-26 23:15:16.203915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.510 [2024-11-26 23:15:16.207155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.510 [2024-11-26 23:15:16.207266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:30:37.510 [2024-11-26 23:15:16.207282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.211 ms 00:30:37.510 [2024-11-26 23:15:16.207287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.510 [2024-11-26 23:15:16.207520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.510 [2024-11-26 23:15:16.207530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:37.510 [2024-11-26 23:15:16.207540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:30:37.510 [2024-11-26 23:15:16.207545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.510 [2024-11-26 23:15:16.240850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.510 [2024-11-26 23:15:16.240951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:30:37.510 [2024-11-26 23:15:16.240966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.286 ms 00:30:37.510 [2024-11-26 23:15:16.240973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.510 [2024-11-26 23:15:16.245268] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.510 [2024-11-26 23:15:16.245308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:30:37.510 [2024-11-26 23:15:16.245321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.258 ms 00:30:37.510 [2024-11-26 23:15:16.245327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.510 [2024-11-26 23:15:16.248698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.510 [2024-11-26 23:15:16.248722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:30:37.510 [2024-11-26 23:15:16.248731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.338 ms 00:30:37.510 [2024-11-26 23:15:16.248736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.510 [2024-11-26 23:15:16.252596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.510 [2024-11-26 23:15:16.252620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:37.510 [2024-11-26 23:15:16.252631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.831 ms 00:30:37.510 [2024-11-26 23:15:16.252636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.510 [2024-11-26 23:15:16.252667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.510 [2024-11-26 23:15:16.252673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:37.510 [2024-11-26 23:15:16.252682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:37.510 [2024-11-26 23:15:16.252687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.510 [2024-11-26 23:15:16.252742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.510 [2024-11-26 23:15:16.252750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:37.511 [2024-11-26 23:15:16.252757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:30:37.511 [2024-11-26 23:15:16.252765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.511 [2024-11-26 23:15:16.253472] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4005.729 ms, result 0 00:30:37.511 { 00:30:37.511 "name": "ftl0", 00:30:37.511 "uuid": "7086a201-ebb6-43da-bca8-1ef81f58afda" 00:30:37.511 } 00:30:37.511 23:15:16 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:30:37.511 23:15:16 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:30:37.511 23:15:16 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:30:37.511 23:15:16 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:30:37.773 [2024-11-26 23:15:16.659795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.773 [2024-11-26 23:15:16.659918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:37.774 [2024-11-26 23:15:16.659931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:37.774 [2024-11-26 23:15:16.659939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.774 [2024-11-26 23:15:16.659960] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
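Between the startup above and this unload trace, restore.sh captures the bdev subsystem state as JSON so the later spdk_dd run can reattach ftl0 without re-creating it. Pieced together from the xtrace lines above (restore.sh@61-65), the equivalent manual sequence is roughly the following; the redirect target is an inference from the --json argument passed to spdk_dd later in the log:

  {
    echo '{"subsystems": ['
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
    echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0

Because the device was created with --fast-shutdown, the unload path persists the L2P, metadata, and superblock and marks the device clean, which is what the 'Set FTL clean state' and "FTL shutdown ... result 0" records that follow show.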
00:30:37.774 [2024-11-26 23:15:16.660375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.774 [2024-11-26 23:15:16.660389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:37.774 [2024-11-26 23:15:16.660404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:30:37.774 [2024-11-26 23:15:16.660410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.774 [2024-11-26 23:15:16.660601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.774 [2024-11-26 23:15:16.660611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:37.774 [2024-11-26 23:15:16.660619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:30:37.774 [2024-11-26 23:15:16.660626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.774 [2024-11-26 23:15:16.663052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.774 [2024-11-26 23:15:16.663067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:37.774 [2024-11-26 23:15:16.663076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.413 ms 00:30:37.774 [2024-11-26 23:15:16.663083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.774 [2024-11-26 23:15:16.667728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.774 [2024-11-26 23:15:16.667746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:37.774 [2024-11-26 23:15:16.667757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.630 ms 00:30:37.774 [2024-11-26 23:15:16.667764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.774 [2024-11-26 23:15:16.669991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.774 [2024-11-26 23:15:16.670014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:37.774 [2024-11-26 23:15:16.670022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.176 ms 00:30:37.774 [2024-11-26 23:15:16.670027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.774 [2024-11-26 23:15:16.674908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.774 [2024-11-26 23:15:16.674932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:37.774 [2024-11-26 23:15:16.674943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.851 ms 00:30:37.774 [2024-11-26 23:15:16.674949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.774 [2024-11-26 23:15:16.675041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.774 [2024-11-26 23:15:16.675048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:37.774 [2024-11-26 23:15:16.675056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:30:37.774 [2024-11-26 23:15:16.675062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.774 [2024-11-26 23:15:16.677474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.774 [2024-11-26 23:15:16.677496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:37.774 [2024-11-26 23:15:16.677504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.391 ms 00:30:37.774 [2024-11-26 23:15:16.677510] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.774 [2024-11-26 23:15:16.679651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.774 [2024-11-26 23:15:16.679671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:37.774 [2024-11-26 23:15:16.679679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.111 ms 00:30:37.774 [2024-11-26 23:15:16.679685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.774 [2024-11-26 23:15:16.681153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.774 [2024-11-26 23:15:16.681175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:37.774 [2024-11-26 23:15:16.681184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.441 ms 00:30:37.774 [2024-11-26 23:15:16.681189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.774 [2024-11-26 23:15:16.682580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.774 [2024-11-26 23:15:16.682602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:37.774 [2024-11-26 23:15:16.682610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.333 ms 00:30:37.774 [2024-11-26 23:15:16.682615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.774 [2024-11-26 23:15:16.682641] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:37.774 [2024-11-26 23:15:16.682651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 
23:15:16.682747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:30:37.774 [2024-11-26 23:15:16.682913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.682998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.683005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.683011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.683018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:37.774 [2024-11-26 23:15:16.683024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:37.775 [2024-11-26 23:15:16.683340] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:37.775 [2024-11-26 23:15:16.683347] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7086a201-ebb6-43da-bca8-1ef81f58afda 00:30:37.775 [2024-11-26 23:15:16.683353] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:37.775 [2024-11-26 23:15:16.683361] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:37.775 [2024-11-26 23:15:16.683367] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:37.775 [2024-11-26 23:15:16.683375] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:37.775 [2024-11-26 23:15:16.683381] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:37.775 [2024-11-26 23:15:16.683388] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:37.775 [2024-11-26 23:15:16.683394] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:37.775 [2024-11-26 23:15:16.683400] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:37.775 [2024-11-26 23:15:16.683406] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:37.775 [2024-11-26 23:15:16.683413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.775 [2024-11-26 23:15:16.683419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:37.775 [2024-11-26 23:15:16.683427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.774 ms 00:30:37.775 [2024-11-26 23:15:16.683433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.775 [2024-11-26 23:15:16.684434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.775 [2024-11-26 23:15:16.684457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
00:30:37.775 [2024-11-26 23:15:16.684466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.983 ms 00:30:37.775 [2024-11-26 23:15:16.684472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.775 [2024-11-26 23:15:16.684539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.775 [2024-11-26 23:15:16.684546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:37.775 [2024-11-26 23:15:16.684554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:30:37.775 [2024-11-26 23:15:16.684559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.775 [2024-11-26 23:15:16.688980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:37.775 [2024-11-26 23:15:16.689004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:37.775 [2024-11-26 23:15:16.689012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:37.775 [2024-11-26 23:15:16.689018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.775 [2024-11-26 23:15:16.689057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:37.775 [2024-11-26 23:15:16.689063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:37.775 [2024-11-26 23:15:16.689071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:37.775 [2024-11-26 23:15:16.689077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.775 [2024-11-26 23:15:16.689118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:37.775 [2024-11-26 23:15:16.689128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:37.775 [2024-11-26 23:15:16.689135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:37.775 [2024-11-26 23:15:16.689140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.775 [2024-11-26 23:15:16.689154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:37.775 [2024-11-26 23:15:16.689161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:37.775 [2024-11-26 23:15:16.689167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:37.775 [2024-11-26 23:15:16.689173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.775 [2024-11-26 23:15:16.697278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:37.775 [2024-11-26 23:15:16.697318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:37.775 [2024-11-26 23:15:16.697327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:37.775 [2024-11-26 23:15:16.697334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.775 [2024-11-26 23:15:16.703890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:37.775 [2024-11-26 23:15:16.703918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:37.775 [2024-11-26 23:15:16.703927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:37.775 [2024-11-26 23:15:16.703933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.775 [2024-11-26 23:15:16.703986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:37.775 [2024-11-26 23:15:16.703994] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:37.775 [2024-11-26 23:15:16.704003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:37.775 [2024-11-26 23:15:16.704008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.775 [2024-11-26 23:15:16.704035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:37.775 [2024-11-26 23:15:16.704041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:37.775 [2024-11-26 23:15:16.704048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:37.775 [2024-11-26 23:15:16.704054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.775 [2024-11-26 23:15:16.704106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:37.775 [2024-11-26 23:15:16.704114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:37.775 [2024-11-26 23:15:16.704122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:37.775 [2024-11-26 23:15:16.704129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.775 [2024-11-26 23:15:16.704156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:37.775 [2024-11-26 23:15:16.704163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:37.775 [2024-11-26 23:15:16.704170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:37.775 [2024-11-26 23:15:16.704176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.775 [2024-11-26 23:15:16.704205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:37.776 [2024-11-26 23:15:16.704212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:37.776 [2024-11-26 23:15:16.704220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:37.776 [2024-11-26 23:15:16.704227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.776 [2024-11-26 23:15:16.704260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:37.776 [2024-11-26 23:15:16.704267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:37.776 [2024-11-26 23:15:16.704275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:37.776 [2024-11-26 23:15:16.704281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.776 [2024-11-26 23:15:16.704391] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 44.563 ms, result 0 00:30:37.776 true 00:30:37.776 23:15:16 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 96649 00:30:37.776 23:15:16 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 96649 ']' 00:30:37.776 23:15:16 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 96649 00:30:37.776 23:15:16 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:30:37.776 23:15:16 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:37.776 23:15:16 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96649 00:30:37.776 23:15:16 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:37.776 23:15:16 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 
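The killprocess helper traced here guards the kill: it requires a pid argument, probes liveness with kill -0, and on Linux inspects the process name via ps before signalling (the comm check against sudo evaluates false for reactor_0 in this run). An approximate reconstruction from the xtrace, not the literal autotest_common.sh body:

  killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                     # @954: require a pid
    kill -0 "$pid" || return 1                    # @958: bail out if already gone
    if [ "$(uname)" = Linux ]; then
      local name
      name=$(ps --no-headers -o comm= "$pid")     # @960: reactor_0 here
      # @964: a sudo process gets special handling (branch not taken in this run)
    fi
    echo "killing process with pid $pid"          # @972
    kill "$pid" && wait "$pid"                    # @973/@978: terminate, then reap
  }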
00:30:37.776 killing process with pid 96649 00:30:37.776 23:15:16 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96649' 00:30:37.776 23:15:16 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 96649 00:30:37.776 23:15:16 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 96649 00:30:43.153 23:15:22 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:30:47.347 262144+0 records in 00:30:47.347 262144+0 records out 00:30:47.347 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.30105 s, 250 MB/s 00:30:47.347 23:15:26 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:49.885 23:15:28 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:49.885 [2024-11-26 23:15:28.640172] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:30:49.885 [2024-11-26 23:15:28.640414] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96858 ] 00:30:49.885 [2024-11-26 23:15:28.767114] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:49.885 [2024-11-26 23:15:28.794384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.885 [2024-11-26 23:15:28.811151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:49.885 [2024-11-26 23:15:28.893808] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:49.885 [2024-11-26 23:15:28.893863] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:50.148 [2024-11-26 23:15:29.043180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.148 [2024-11-26 23:15:29.043211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:50.148 [2024-11-26 23:15:29.043221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:50.148 [2024-11-26 23:15:29.043227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.148 [2024-11-26 23:15:29.043268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.148 [2024-11-26 23:15:29.043279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:50.148 [2024-11-26 23:15:29.043285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:30:50.148 [2024-11-26 23:15:29.043292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.148 [2024-11-26 23:15:29.043314] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:50.148 [2024-11-26 23:15:29.043486] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:50.148 [2024-11-26 23:15:29.043499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.148 [2024-11-26 23:15:29.043506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:50.148 [2024-11-26 23:15:29.043513] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:30:50.148 [2024-11-26 23:15:29.043518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.148 [2024-11-26 23:15:29.044425] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:50.148 [2024-11-26 23:15:29.046516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.148 [2024-11-26 23:15:29.046542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:50.148 [2024-11-26 23:15:29.046555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.092 ms 00:30:50.148 [2024-11-26 23:15:29.046561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.148 [2024-11-26 23:15:29.046608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.148 [2024-11-26 23:15:29.046618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:50.148 [2024-11-26 23:15:29.046625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:30:50.148 [2024-11-26 23:15:29.046631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.148 [2024-11-26 23:15:29.050969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.148 [2024-11-26 23:15:29.050992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:50.149 [2024-11-26 23:15:29.051002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.307 ms 00:30:50.149 [2024-11-26 23:15:29.051008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.149 [2024-11-26 23:15:29.051073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.149 [2024-11-26 23:15:29.051080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:50.149 [2024-11-26 23:15:29.051087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:30:50.149 [2024-11-26 23:15:29.051096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.149 [2024-11-26 23:15:29.051133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.149 [2024-11-26 23:15:29.051142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:50.149 [2024-11-26 23:15:29.051152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:50.149 [2024-11-26 23:15:29.051158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.149 [2024-11-26 23:15:29.051173] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:50.149 [2024-11-26 23:15:29.052316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.149 [2024-11-26 23:15:29.052334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:50.149 [2024-11-26 23:15:29.052347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.146 ms 00:30:50.149 [2024-11-26 23:15:29.052354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.149 [2024-11-26 23:15:29.052375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.149 [2024-11-26 23:15:29.052382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:50.149 [2024-11-26 23:15:29.052390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:50.149 [2024-11-26 23:15:29.052396] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.149 [2024-11-26 23:15:29.052412] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:50.149 [2024-11-26 23:15:29.052426] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:50.149 [2024-11-26 23:15:29.052452] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:50.149 [2024-11-26 23:15:29.052464] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:50.149 [2024-11-26 23:15:29.052544] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:50.149 [2024-11-26 23:15:29.052555] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:50.149 [2024-11-26 23:15:29.052563] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:50.149 [2024-11-26 23:15:29.052571] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:50.149 [2024-11-26 23:15:29.052578] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:50.149 [2024-11-26 23:15:29.052586] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:50.149 [2024-11-26 23:15:29.052592] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:50.149 [2024-11-26 23:15:29.052598] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:50.149 [2024-11-26 23:15:29.052606] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:50.149 [2024-11-26 23:15:29.052612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.149 [2024-11-26 23:15:29.052619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:50.149 [2024-11-26 23:15:29.052626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:30:50.149 [2024-11-26 23:15:29.052634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.149 [2024-11-26 23:15:29.052699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.149 [2024-11-26 23:15:29.052706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:50.149 [2024-11-26 23:15:29.052712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:30:50.149 [2024-11-26 23:15:29.052717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.149 [2024-11-26 23:15:29.052792] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:50.149 [2024-11-26 23:15:29.052800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:50.149 [2024-11-26 23:15:29.052807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:50.149 [2024-11-26 23:15:29.052813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:50.149 [2024-11-26 23:15:29.052821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:50.149 [2024-11-26 23:15:29.052826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:50.149 [2024-11-26 23:15:29.052836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 
MiB 00:30:50.149 [2024-11-26 23:15:29.052843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:50.149 [2024-11-26 23:15:29.052850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:50.149 [2024-11-26 23:15:29.052855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:50.149 [2024-11-26 23:15:29.052860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:50.149 [2024-11-26 23:15:29.052867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:50.149 [2024-11-26 23:15:29.052872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:50.149 [2024-11-26 23:15:29.052877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:50.149 [2024-11-26 23:15:29.052882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:50.149 [2024-11-26 23:15:29.052887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:50.149 [2024-11-26 23:15:29.052893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:50.149 [2024-11-26 23:15:29.052899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:50.149 [2024-11-26 23:15:29.052904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:50.149 [2024-11-26 23:15:29.052909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:50.149 [2024-11-26 23:15:29.052914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:50.149 [2024-11-26 23:15:29.052919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:50.149 [2024-11-26 23:15:29.052924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:50.149 [2024-11-26 23:15:29.052928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:50.149 [2024-11-26 23:15:29.052933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:50.149 [2024-11-26 23:15:29.052938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:50.149 [2024-11-26 23:15:29.052943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:50.149 [2024-11-26 23:15:29.052952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:50.149 [2024-11-26 23:15:29.052957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:50.149 [2024-11-26 23:15:29.052962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:50.149 [2024-11-26 23:15:29.052968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:50.149 [2024-11-26 23:15:29.052974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:50.149 [2024-11-26 23:15:29.052980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:50.149 [2024-11-26 23:15:29.052986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:50.149 [2024-11-26 23:15:29.052991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:50.149 [2024-11-26 23:15:29.052997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:50.149 [2024-11-26 23:15:29.053002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:50.149 [2024-11-26 23:15:29.053008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:50.149 [2024-11-26 23:15:29.053014] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:50.149 [2024-11-26 23:15:29.053019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:50.149 [2024-11-26 23:15:29.053026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:50.149 [2024-11-26 23:15:29.053032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:50.149 [2024-11-26 23:15:29.053038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:50.149 [2024-11-26 23:15:29.053046] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:50.149 [2024-11-26 23:15:29.053053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:50.149 [2024-11-26 23:15:29.053059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:50.149 [2024-11-26 23:15:29.053065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:50.149 [2024-11-26 23:15:29.053074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:50.149 [2024-11-26 23:15:29.053080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:50.149 [2024-11-26 23:15:29.053086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:50.149 [2024-11-26 23:15:29.053092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:50.149 [2024-11-26 23:15:29.053097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:50.149 [2024-11-26 23:15:29.053103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:50.149 [2024-11-26 23:15:29.053110] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:50.149 [2024-11-26 23:15:29.053118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:50.149 [2024-11-26 23:15:29.053128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:50.149 [2024-11-26 23:15:29.053135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:50.149 [2024-11-26 23:15:29.053141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:50.149 [2024-11-26 23:15:29.053147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:50.149 [2024-11-26 23:15:29.053154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:50.150 [2024-11-26 23:15:29.053161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:50.150 [2024-11-26 23:15:29.053167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:50.150 [2024-11-26 23:15:29.053173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:50.150 [2024-11-26 23:15:29.053179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:50.150 [2024-11-26 23:15:29.053186] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:50.150 [2024-11-26 23:15:29.053192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:50.150 [2024-11-26 23:15:29.053198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:50.150 [2024-11-26 23:15:29.053215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:50.150 [2024-11-26 23:15:29.053222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:50.150 [2024-11-26 23:15:29.053228] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:50.150 [2024-11-26 23:15:29.053235] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:50.150 [2024-11-26 23:15:29.053243] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:50.150 [2024-11-26 23:15:29.053250] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:50.150 [2024-11-26 23:15:29.053258] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:50.150 [2024-11-26 23:15:29.053265] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:50.150 [2024-11-26 23:15:29.053274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.150 [2024-11-26 23:15:29.053281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:50.150 [2024-11-26 23:15:29.053288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:30:50.150 [2024-11-26 23:15:29.053308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.150 [2024-11-26 23:15:29.061087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.150 [2024-11-26 23:15:29.061112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:50.150 [2024-11-26 23:15:29.061119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.738 ms 00:30:50.150 [2024-11-26 23:15:29.061125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.150 [2024-11-26 23:15:29.061184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.150 [2024-11-26 23:15:29.061191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:50.150 [2024-11-26 23:15:29.061197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:30:50.150 [2024-11-26 23:15:29.061210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.150 [2024-11-26 23:15:29.078814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.150 [2024-11-26 23:15:29.078846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:50.150 [2024-11-26 23:15:29.078858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 17.565 ms 00:30:50.150 [2024-11-26 23:15:29.078866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.150 [2024-11-26 23:15:29.078904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.150 [2024-11-26 23:15:29.078914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:50.150 [2024-11-26 23:15:29.078922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:50.150 [2024-11-26 23:15:29.078934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.150 [2024-11-26 23:15:29.079274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.150 [2024-11-26 23:15:29.079311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:50.150 [2024-11-26 23:15:29.079328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:30:50.150 [2024-11-26 23:15:29.079337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.150 [2024-11-26 23:15:29.079460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.150 [2024-11-26 23:15:29.079475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:50.150 [2024-11-26 23:15:29.079484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:30:50.150 [2024-11-26 23:15:29.079493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.150 [2024-11-26 23:15:29.084634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.150 [2024-11-26 23:15:29.084662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:50.150 [2024-11-26 23:15:29.084673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.118 ms 00:30:50.150 [2024-11-26 23:15:29.084689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.150 [2024-11-26 23:15:29.087434] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:50.150 [2024-11-26 23:15:29.087469] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:50.150 [2024-11-26 23:15:29.087481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.150 [2024-11-26 23:15:29.087490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:50.150 [2024-11-26 23:15:29.087499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.713 ms 00:30:50.150 [2024-11-26 23:15:29.087507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.150 [2024-11-26 23:15:29.100376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.150 [2024-11-26 23:15:29.100407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:50.150 [2024-11-26 23:15:29.100415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.830 ms 00:30:50.150 [2024-11-26 23:15:29.100422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.150 [2024-11-26 23:15:29.101940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.150 [2024-11-26 23:15:29.101963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:50.150 [2024-11-26 23:15:29.101971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.488 ms 00:30:50.150 [2024-11-26 23:15:29.101976] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.150 [2024-11-26 23:15:29.103156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.150 [2024-11-26 23:15:29.103178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:50.150 [2024-11-26 23:15:29.103186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.154 ms 00:30:50.150 [2024-11-26 23:15:29.103192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.150 [2024-11-26 23:15:29.103444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.150 [2024-11-26 23:15:29.103454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:50.150 [2024-11-26 23:15:29.103461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:30:50.150 [2024-11-26 23:15:29.103470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.150 [2024-11-26 23:15:29.119420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.150 [2024-11-26 23:15:29.119451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:50.150 [2024-11-26 23:15:29.119461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.932 ms 00:30:50.150 [2024-11-26 23:15:29.119467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.150 [2024-11-26 23:15:29.125158] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:50.150 [2024-11-26 23:15:29.127201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.150 [2024-11-26 23:15:29.127226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:50.150 [2024-11-26 23:15:29.127235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.688 ms 00:30:50.150 [2024-11-26 23:15:29.127242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.150 [2024-11-26 23:15:29.127286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.150 [2024-11-26 23:15:29.127310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:50.150 [2024-11-26 23:15:29.127318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:50.150 [2024-11-26 23:15:29.127324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.150 [2024-11-26 23:15:29.127374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.150 [2024-11-26 23:15:29.127382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:50.150 [2024-11-26 23:15:29.127393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:30:50.150 [2024-11-26 23:15:29.127401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.150 [2024-11-26 23:15:29.127416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.150 [2024-11-26 23:15:29.127423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:50.150 [2024-11-26 23:15:29.127429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:50.150 [2024-11-26 23:15:29.127434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.150 [2024-11-26 23:15:29.127458] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:50.150 [2024-11-26 23:15:29.127467] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.150 [2024-11-26 23:15:29.127476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:50.150 [2024-11-26 23:15:29.127482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:50.150 [2024-11-26 23:15:29.127490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.150 [2024-11-26 23:15:29.130368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.150 [2024-11-26 23:15:29.130392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:50.150 [2024-11-26 23:15:29.130401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.866 ms 00:30:50.150 [2024-11-26 23:15:29.130407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.150 [2024-11-26 23:15:29.130460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.150 [2024-11-26 23:15:29.130468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:50.150 [2024-11-26 23:15:29.130474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:30:50.150 [2024-11-26 23:15:29.130479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.150 [2024-11-26 23:15:29.131197] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 87.723 ms, result 0 00:30:51.094  [2024-11-26T23:15:31.164Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-26T23:15:32.550Z] Copying: 37/1024 [MB] (18 MBps) [2024-11-26T23:15:33.495Z] Copying: 61/1024 [MB] (24 MBps) [2024-11-26T23:15:34.438Z] Copying: 82/1024 [MB] (20 MBps) [2024-11-26T23:15:35.376Z] Copying: 105/1024 [MB] (23 MBps) [2024-11-26T23:15:36.321Z] Copying: 127/1024 [MB] (22 MBps) [2024-11-26T23:15:37.264Z] Copying: 144/1024 [MB] (17 MBps) [2024-11-26T23:15:38.222Z] Copying: 165/1024 [MB] (21 MBps) [2024-11-26T23:15:39.165Z] Copying: 183/1024 [MB] (17 MBps) [2024-11-26T23:15:40.549Z] Copying: 203/1024 [MB] (20 MBps) [2024-11-26T23:15:41.142Z] Copying: 223/1024 [MB] (20 MBps) [2024-11-26T23:15:42.526Z] Copying: 235/1024 [MB] (11 MBps) [2024-11-26T23:15:43.481Z] Copying: 247/1024 [MB] (11 MBps) [2024-11-26T23:15:44.426Z] Copying: 257/1024 [MB] (10 MBps) [2024-11-26T23:15:45.371Z] Copying: 269/1024 [MB] (11 MBps) [2024-11-26T23:15:46.314Z] Copying: 280/1024 [MB] (11 MBps) [2024-11-26T23:15:47.258Z] Copying: 291/1024 [MB] (10 MBps) [2024-11-26T23:15:48.210Z] Copying: 303/1024 [MB] (11 MBps) [2024-11-26T23:15:49.155Z] Copying: 314/1024 [MB] (11 MBps) [2024-11-26T23:15:50.178Z] Copying: 325/1024 [MB] (10 MBps) [2024-11-26T23:15:51.568Z] Copying: 336/1024 [MB] (11 MBps) [2024-11-26T23:15:52.512Z] Copying: 347/1024 [MB] (10 MBps) [2024-11-26T23:15:53.457Z] Copying: 357/1024 [MB] (10 MBps) [2024-11-26T23:15:54.405Z] Copying: 368/1024 [MB] (10 MBps) [2024-11-26T23:15:55.350Z] Copying: 379/1024 [MB] (11 MBps) [2024-11-26T23:15:56.293Z] Copying: 391/1024 [MB] (11 MBps) [2024-11-26T23:15:57.238Z] Copying: 402/1024 [MB] (11 MBps) [2024-11-26T23:15:58.179Z] Copying: 413/1024 [MB] (11 MBps) [2024-11-26T23:15:59.563Z] Copying: 425/1024 [MB] (11 MBps) [2024-11-26T23:16:00.506Z] Copying: 436/1024 [MB] (11 MBps) [2024-11-26T23:16:01.453Z] Copying: 447/1024 [MB] (11 MBps) [2024-11-26T23:16:02.397Z] Copying: 459/1024 [MB] (11 MBps) [2024-11-26T23:16:03.341Z] Copying: 470/1024 [MB] (11 MBps) [2024-11-26T23:16:04.286Z] Copying: 482/1024 [MB] (11 MBps) [2024-11-26T23:16:05.228Z] 
Copying: 492/1024 [MB] (10 MBps) [2024-11-26T23:16:06.173Z] Copying: 503/1024 [MB] (11 MBps) [2024-11-26T23:16:07.561Z] Copying: 515/1024 [MB] (11 MBps) [2024-11-26T23:16:08.505Z] Copying: 525/1024 [MB] (10 MBps) [2024-11-26T23:16:09.447Z] Copying: 537/1024 [MB] (11 MBps) [2024-11-26T23:16:10.386Z] Copying: 548/1024 [MB] (11 MBps) [2024-11-26T23:16:11.327Z] Copying: 559/1024 [MB] (11 MBps) [2024-11-26T23:16:12.267Z] Copying: 570/1024 [MB] (11 MBps) [2024-11-26T23:16:13.209Z] Copying: 581/1024 [MB] (11 MBps) [2024-11-26T23:16:14.150Z] Copying: 593/1024 [MB] (11 MBps) [2024-11-26T23:16:15.537Z] Copying: 604/1024 [MB] (11 MBps) [2024-11-26T23:16:16.485Z] Copying: 615/1024 [MB] (11 MBps) [2024-11-26T23:16:17.430Z] Copying: 626/1024 [MB] (10 MBps) [2024-11-26T23:16:18.371Z] Copying: 637/1024 [MB] (10 MBps) [2024-11-26T23:16:19.399Z] Copying: 648/1024 [MB] (11 MBps) [2024-11-26T23:16:20.343Z] Copying: 660/1024 [MB] (11 MBps) [2024-11-26T23:16:21.285Z] Copying: 670/1024 [MB] (10 MBps) [2024-11-26T23:16:22.230Z] Copying: 682/1024 [MB] (11 MBps) [2024-11-26T23:16:23.176Z] Copying: 692/1024 [MB] (10 MBps) [2024-11-26T23:16:24.563Z] Copying: 703/1024 [MB] (10 MBps) [2024-11-26T23:16:25.507Z] Copying: 714/1024 [MB] (11 MBps) [2024-11-26T23:16:26.450Z] Copying: 725/1024 [MB] (10 MBps) [2024-11-26T23:16:27.404Z] Copying: 736/1024 [MB] (11 MBps) [2024-11-26T23:16:28.348Z] Copying: 748/1024 [MB] (11 MBps) [2024-11-26T23:16:29.294Z] Copying: 759/1024 [MB] (11 MBps) [2024-11-26T23:16:30.238Z] Copying: 770/1024 [MB] (11 MBps) [2024-11-26T23:16:31.185Z] Copying: 782/1024 [MB] (11 MBps) [2024-11-26T23:16:32.573Z] Copying: 793/1024 [MB] (11 MBps) [2024-11-26T23:16:33.150Z] Copying: 804/1024 [MB] (11 MBps) [2024-11-26T23:16:34.542Z] Copying: 816/1024 [MB] (11 MBps) [2024-11-26T23:16:35.486Z] Copying: 827/1024 [MB] (11 MBps) [2024-11-26T23:16:36.432Z] Copying: 839/1024 [MB] (11 MBps) [2024-11-26T23:16:37.376Z] Copying: 850/1024 [MB] (11 MBps) [2024-11-26T23:16:38.320Z] Copying: 861/1024 [MB] (11 MBps) [2024-11-26T23:16:39.263Z] Copying: 872/1024 [MB] (11 MBps) [2024-11-26T23:16:40.224Z] Copying: 884/1024 [MB] (11 MBps) [2024-11-26T23:16:41.169Z] Copying: 895/1024 [MB] (11 MBps) [2024-11-26T23:16:42.555Z] Copying: 906/1024 [MB] (11 MBps) [2024-11-26T23:16:43.500Z] Copying: 918/1024 [MB] (11 MBps) [2024-11-26T23:16:44.441Z] Copying: 929/1024 [MB] (10 MBps) [2024-11-26T23:16:45.384Z] Copying: 940/1024 [MB] (11 MBps) [2024-11-26T23:16:46.329Z] Copying: 952/1024 [MB] (11 MBps) [2024-11-26T23:16:47.279Z] Copying: 963/1024 [MB] (10 MBps) [2024-11-26T23:16:48.293Z] Copying: 974/1024 [MB] (11 MBps) [2024-11-26T23:16:49.248Z] Copying: 985/1024 [MB] (11 MBps) [2024-11-26T23:16:50.190Z] Copying: 997/1024 [MB] (11 MBps) [2024-11-26T23:16:51.578Z] Copying: 1008/1024 [MB] (11 MBps) [2024-11-26T23:16:51.578Z] Copying: 1019/1024 [MB] (11 MBps) [2024-11-26T23:16:51.578Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-11-26 23:16:51.514564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.451 [2024-11-26 23:16:51.514681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:12.451 [2024-11-26 23:16:51.514719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:12.451 [2024-11-26 23:16:51.514782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.451 [2024-11-26 23:16:51.514823] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:12.451 [2024-11-26 23:16:51.515627] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.451 [2024-11-26 23:16:51.515747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:12.451 [2024-11-26 23:16:51.515803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.480 ms 00:32:12.451 [2024-11-26 23:16:51.515826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.451 [2024-11-26 23:16:51.518617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.451 [2024-11-26 23:16:51.518727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:12.451 [2024-11-26 23:16:51.518783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.755 ms 00:32:12.451 [2024-11-26 23:16:51.518813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.451 [2024-11-26 23:16:51.518852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.451 [2024-11-26 23:16:51.518874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:12.451 [2024-11-26 23:16:51.518895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:12.451 [2024-11-26 23:16:51.518913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.451 [2024-11-26 23:16:51.518967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.451 [2024-11-26 23:16:51.519130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:12.451 [2024-11-26 23:16:51.519157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:12.451 [2024-11-26 23:16:51.519176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.451 [2024-11-26 23:16:51.519209] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:12.451 [2024-11-26 23:16:51.519234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.519627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.519683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.519719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.519749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.519819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.519849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.519878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.520223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.520280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.520325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.520355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.520424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.520465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.520532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.520562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.520613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.520644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.520673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.520702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.520730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.520759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.520787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.520816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.520845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.520918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.520948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.520977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.521005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.521014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.521022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.521030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.521039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.521046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.521054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.521061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:12.451 [2024-11-26 23:16:51.521069] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521272] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 
23:16:51.521839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:12.452 [2024-11-26 23:16:51.521950] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:12.452 [2024-11-26 23:16:51.521958] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7086a201-ebb6-43da-bca8-1ef81f58afda 00:32:12.452 [2024-11-26 23:16:51.521966] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:12.452 [2024-11-26 23:16:51.521973] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:32:12.452 [2024-11-26 23:16:51.521980] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:12.452 [2024-11-26 23:16:51.521987] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:12.452 [2024-11-26 23:16:51.521994] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:12.452 [2024-11-26 23:16:51.522002] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:12.452 [2024-11-26 23:16:51.522016] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:12.452 [2024-11-26 23:16:51.522022] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:12.452 [2024-11-26 23:16:51.522028] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:12.452 [2024-11-26 23:16:51.522036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.452 [2024-11-26 23:16:51.522044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:12.452 [2024-11-26 23:16:51.522057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 2.828 ms 00:32:12.452 [2024-11-26 23:16:51.522065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.452 [2024-11-26 23:16:51.523561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.452 [2024-11-26 23:16:51.523580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:12.452 [2024-11-26 23:16:51.523593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.470 ms 00:32:12.452 [2024-11-26 23:16:51.523601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.452 [2024-11-26 23:16:51.523681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.452 [2024-11-26 23:16:51.523690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:12.452 [2024-11-26 23:16:51.523698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:32:12.452 [2024-11-26 23:16:51.523705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.452 [2024-11-26 23:16:51.528753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.452 [2024-11-26 23:16:51.528788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:12.452 [2024-11-26 23:16:51.528798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.452 [2024-11-26 23:16:51.528806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.452 [2024-11-26 23:16:51.528858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.452 [2024-11-26 23:16:51.528870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:12.452 [2024-11-26 23:16:51.528878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.452 [2024-11-26 23:16:51.528885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.452 [2024-11-26 23:16:51.528932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.453 [2024-11-26 23:16:51.528943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:12.453 [2024-11-26 23:16:51.528951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.453 [2024-11-26 23:16:51.528958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.453 [2024-11-26 23:16:51.528971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.453 [2024-11-26 23:16:51.528979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:12.453 [2024-11-26 23:16:51.528989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.453 [2024-11-26 23:16:51.528996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.453 [2024-11-26 23:16:51.538333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.453 [2024-11-26 23:16:51.538368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:12.453 [2024-11-26 23:16:51.538384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.453 [2024-11-26 23:16:51.538392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.453 [2024-11-26 23:16:51.545749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.453 [2024-11-26 23:16:51.545790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:12.453 
[2024-11-26 23:16:51.545800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.453 [2024-11-26 23:16:51.545807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.453 [2024-11-26 23:16:51.545830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.453 [2024-11-26 23:16:51.545837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:12.453 [2024-11-26 23:16:51.545845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.453 [2024-11-26 23:16:51.545852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.453 [2024-11-26 23:16:51.545920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.453 [2024-11-26 23:16:51.545934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:12.453 [2024-11-26 23:16:51.545942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.453 [2024-11-26 23:16:51.545953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.453 [2024-11-26 23:16:51.546003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.453 [2024-11-26 23:16:51.546012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:12.453 [2024-11-26 23:16:51.546019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.453 [2024-11-26 23:16:51.546027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.453 [2024-11-26 23:16:51.546049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.453 [2024-11-26 23:16:51.546061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:12.453 [2024-11-26 23:16:51.546070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.453 [2024-11-26 23:16:51.546082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.453 [2024-11-26 23:16:51.546117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.453 [2024-11-26 23:16:51.546126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:12.453 [2024-11-26 23:16:51.546133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.453 [2024-11-26 23:16:51.546141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.453 [2024-11-26 23:16:51.546183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.453 [2024-11-26 23:16:51.546194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:12.453 [2024-11-26 23:16:51.546205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.453 [2024-11-26 23:16:51.546214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.453 [2024-11-26 23:16:51.546343] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 31.738 ms, result 0 00:32:13.026 00:32:13.026 00:32:13.026 23:16:51 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:32:13.026 [2024-11-26 23:16:52.052502] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 
00:32:13.026 [2024-11-26 23:16:52.052668] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97697 ] 00:32:13.286 [2024-11-26 23:16:52.192487] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:13.286 [2024-11-26 23:16:52.223549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.286 [2024-11-26 23:16:52.263555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:13.548 [2024-11-26 23:16:52.412761] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:13.549 [2024-11-26 23:16:52.412871] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:13.549 [2024-11-26 23:16:52.577389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.549 [2024-11-26 23:16:52.577450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:13.549 [2024-11-26 23:16:52.577479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:13.549 [2024-11-26 23:16:52.577498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.549 [2024-11-26 23:16:52.577588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.549 [2024-11-26 23:16:52.577607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:13.549 [2024-11-26 23:16:52.577621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:32:13.549 [2024-11-26 23:16:52.577637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.549 [2024-11-26 23:16:52.577670] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:13.549 [2024-11-26 23:16:52.578066] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:13.549 [2024-11-26 23:16:52.578108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.549 [2024-11-26 23:16:52.578131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:13.549 [2024-11-26 23:16:52.578146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:32:13.549 [2024-11-26 23:16:52.578158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.549 [2024-11-26 23:16:52.579087] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:13.549 [2024-11-26 23:16:52.579162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.549 [2024-11-26 23:16:52.579189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:13.549 [2024-11-26 23:16:52.579210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:32:13.549 [2024-11-26 23:16:52.579228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.549 [2024-11-26 23:16:52.579328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.549 [2024-11-26 23:16:52.579346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:13.549 [2024-11-26 23:16:52.579368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:32:13.549 [2024-11-26 
23:16:52.579382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.549 [2024-11-26 23:16:52.579780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.549 [2024-11-26 23:16:52.579814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:13.549 [2024-11-26 23:16:52.579828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:32:13.549 [2024-11-26 23:16:52.579849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.549 [2024-11-26 23:16:52.579981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.549 [2024-11-26 23:16:52.579996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:13.549 [2024-11-26 23:16:52.580010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:32:13.549 [2024-11-26 23:16:52.580023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.549 [2024-11-26 23:16:52.580064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.549 [2024-11-26 23:16:52.580079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:13.549 [2024-11-26 23:16:52.580093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:13.549 [2024-11-26 23:16:52.580106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.549 [2024-11-26 23:16:52.580143] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:13.549 [2024-11-26 23:16:52.582827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.549 [2024-11-26 23:16:52.582876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:13.549 [2024-11-26 23:16:52.582893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.693 ms 00:32:13.549 [2024-11-26 23:16:52.582905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.549 [2024-11-26 23:16:52.582957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.549 [2024-11-26 23:16:52.582978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:13.549 [2024-11-26 23:16:52.582993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:32:13.549 [2024-11-26 23:16:52.583005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.549 [2024-11-26 23:16:52.583087] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:13.549 [2024-11-26 23:16:52.583123] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:13.549 [2024-11-26 23:16:52.583182] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:13.549 [2024-11-26 23:16:52.583214] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:13.549 [2024-11-26 23:16:52.583384] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:13.549 [2024-11-26 23:16:52.583406] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:13.549 [2024-11-26 23:16:52.583434] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:13.549 
[2024-11-26 23:16:52.583456] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:13.549 [2024-11-26 23:16:52.583471] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:13.549 [2024-11-26 23:16:52.583485] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:13.549 [2024-11-26 23:16:52.583499] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:13.549 [2024-11-26 23:16:52.583511] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:13.549 [2024-11-26 23:16:52.583523] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:13.549 [2024-11-26 23:16:52.583535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.549 [2024-11-26 23:16:52.583548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:13.549 [2024-11-26 23:16:52.583562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:32:13.549 [2024-11-26 23:16:52.583574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.549 [2024-11-26 23:16:52.583697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.549 [2024-11-26 23:16:52.583729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:13.549 [2024-11-26 23:16:52.583744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:32:13.549 [2024-11-26 23:16:52.583755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.549 [2024-11-26 23:16:52.583895] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:13.549 [2024-11-26 23:16:52.583924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:13.549 [2024-11-26 23:16:52.583939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:13.549 [2024-11-26 23:16:52.583957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:13.549 [2024-11-26 23:16:52.583970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:13.549 [2024-11-26 23:16:52.583982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:13.549 [2024-11-26 23:16:52.583995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:13.549 [2024-11-26 23:16:52.584006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:13.549 [2024-11-26 23:16:52.584026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:13.549 [2024-11-26 23:16:52.584037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:13.549 [2024-11-26 23:16:52.584049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:13.549 [2024-11-26 23:16:52.584060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:13.549 [2024-11-26 23:16:52.584071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:13.549 [2024-11-26 23:16:52.584083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:13.549 [2024-11-26 23:16:52.584095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:13.549 [2024-11-26 23:16:52.584107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:13.549 [2024-11-26 23:16:52.584118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 
00:32:13.549 [2024-11-26 23:16:52.584133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:13.550 [2024-11-26 23:16:52.584145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:13.550 [2024-11-26 23:16:52.584158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:13.550 [2024-11-26 23:16:52.584170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:13.550 [2024-11-26 23:16:52.584181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:13.550 [2024-11-26 23:16:52.584193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:13.550 [2024-11-26 23:16:52.584204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:13.550 [2024-11-26 23:16:52.584216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:13.550 [2024-11-26 23:16:52.584227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:13.550 [2024-11-26 23:16:52.584239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:13.550 [2024-11-26 23:16:52.584249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:13.550 [2024-11-26 23:16:52.584260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:13.550 [2024-11-26 23:16:52.584271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:13.550 [2024-11-26 23:16:52.584282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:13.550 [2024-11-26 23:16:52.584308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:13.550 [2024-11-26 23:16:52.584320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:13.550 [2024-11-26 23:16:52.584342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:13.550 [2024-11-26 23:16:52.584354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:13.550 [2024-11-26 23:16:52.584365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:13.550 [2024-11-26 23:16:52.584376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:13.550 [2024-11-26 23:16:52.584387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:13.550 [2024-11-26 23:16:52.584401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:13.550 [2024-11-26 23:16:52.584414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:13.550 [2024-11-26 23:16:52.584427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:13.550 [2024-11-26 23:16:52.584439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:13.550 [2024-11-26 23:16:52.584451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:13.550 [2024-11-26 23:16:52.584462] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:13.550 [2024-11-26 23:16:52.584475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:13.550 [2024-11-26 23:16:52.584493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:13.550 [2024-11-26 23:16:52.584506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:13.550 [2024-11-26 23:16:52.584519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:13.550 [2024-11-26 23:16:52.584530] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:13.550 [2024-11-26 23:16:52.584545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:13.550 [2024-11-26 23:16:52.584558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:13.550 [2024-11-26 23:16:52.584570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:13.550 [2024-11-26 23:16:52.584582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:13.550 [2024-11-26 23:16:52.584597] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:13.550 [2024-11-26 23:16:52.584613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:13.550 [2024-11-26 23:16:52.584628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:13.550 [2024-11-26 23:16:52.584641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:13.550 [2024-11-26 23:16:52.584654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:13.550 [2024-11-26 23:16:52.584666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:13.550 [2024-11-26 23:16:52.584679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:13.550 [2024-11-26 23:16:52.584693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:13.550 [2024-11-26 23:16:52.584706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:13.550 [2024-11-26 23:16:52.584718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:13.550 [2024-11-26 23:16:52.584729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:13.550 [2024-11-26 23:16:52.584742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:13.550 [2024-11-26 23:16:52.584757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:13.550 [2024-11-26 23:16:52.584770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:13.550 [2024-11-26 23:16:52.584782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:13.550 [2024-11-26 23:16:52.584794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:13.550 [2024-11-26 23:16:52.584816] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:13.550 [2024-11-26 23:16:52.584833] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 
blk_offs:0x0 blk_sz:0x20 00:32:13.550 [2024-11-26 23:16:52.584849] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:13.550 [2024-11-26 23:16:52.584862] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:13.550 [2024-11-26 23:16:52.584876] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:13.550 [2024-11-26 23:16:52.584889] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:13.550 [2024-11-26 23:16:52.584903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.550 [2024-11-26 23:16:52.584918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:13.550 [2024-11-26 23:16:52.584932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.103 ms 00:32:13.550 [2024-11-26 23:16:52.584950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.550 [2024-11-26 23:16:52.597684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.550 [2024-11-26 23:16:52.597732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:13.550 [2024-11-26 23:16:52.597750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.662 ms 00:32:13.550 [2024-11-26 23:16:52.597769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.550 [2024-11-26 23:16:52.597881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.550 [2024-11-26 23:16:52.597897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:13.550 [2024-11-26 23:16:52.597911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:32:13.550 [2024-11-26 23:16:52.597928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.550 [2024-11-26 23:16:52.622094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.550 [2024-11-26 23:16:52.622164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:13.550 [2024-11-26 23:16:52.622188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.086 ms 00:32:13.550 [2024-11-26 23:16:52.622205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.550 [2024-11-26 23:16:52.622275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.550 [2024-11-26 23:16:52.622314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:13.550 [2024-11-26 23:16:52.622335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:13.550 [2024-11-26 23:16:52.622351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.550 [2024-11-26 23:16:52.622538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.550 [2024-11-26 23:16:52.622581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:13.550 [2024-11-26 23:16:52.622601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:32:13.551 [2024-11-26 23:16:52.622618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.551 [2024-11-26 23:16:52.622853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:13.551 [2024-11-26 23:16:52.622888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:13.551 [2024-11-26 23:16:52.622907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:32:13.551 [2024-11-26 23:16:52.622930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.551 [2024-11-26 23:16:52.633925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.551 [2024-11-26 23:16:52.633985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:13.551 [2024-11-26 23:16:52.634002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.957 ms 00:32:13.551 [2024-11-26 23:16:52.634014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.551 [2024-11-26 23:16:52.634207] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:13.551 [2024-11-26 23:16:52.634228] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:13.551 [2024-11-26 23:16:52.634244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.551 [2024-11-26 23:16:52.634258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:13.551 [2024-11-26 23:16:52.634276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:32:13.551 [2024-11-26 23:16:52.634294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.551 [2024-11-26 23:16:52.646846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.551 [2024-11-26 23:16:52.646894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:13.551 [2024-11-26 23:16:52.646911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.500 ms 00:32:13.551 [2024-11-26 23:16:52.646922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.551 [2024-11-26 23:16:52.647103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.551 [2024-11-26 23:16:52.647119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:13.551 [2024-11-26 23:16:52.647148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:32:13.551 [2024-11-26 23:16:52.647161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.551 [2024-11-26 23:16:52.647240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.551 [2024-11-26 23:16:52.647272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:13.551 [2024-11-26 23:16:52.647324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:13.551 [2024-11-26 23:16:52.647341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.551 [2024-11-26 23:16:52.647722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.551 [2024-11-26 23:16:52.647756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:13.551 [2024-11-26 23:16:52.647769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:32:13.551 [2024-11-26 23:16:52.647781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.551 [2024-11-26 23:16:52.647811] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:13.551 
[2024-11-26 23:16:52.647826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.551 [2024-11-26 23:16:52.647844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:13.551 [2024-11-26 23:16:52.647858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:32:13.551 [2024-11-26 23:16:52.647869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.551 [2024-11-26 23:16:52.658482] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:13.551 [2024-11-26 23:16:52.658696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.551 [2024-11-26 23:16:52.658732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:13.551 [2024-11-26 23:16:52.658745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.795 ms 00:32:13.551 [2024-11-26 23:16:52.658757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.551 [2024-11-26 23:16:52.661476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.551 [2024-11-26 23:16:52.661529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:13.551 [2024-11-26 23:16:52.661541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.688 ms 00:32:13.551 [2024-11-26 23:16:52.661550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.551 [2024-11-26 23:16:52.661657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.551 [2024-11-26 23:16:52.661673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:13.551 [2024-11-26 23:16:52.661683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:32:13.551 [2024-11-26 23:16:52.661697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.551 [2024-11-26 23:16:52.661723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.551 [2024-11-26 23:16:52.661738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:13.551 [2024-11-26 23:16:52.661751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:13.551 [2024-11-26 23:16:52.661759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.551 [2024-11-26 23:16:52.661799] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:13.551 [2024-11-26 23:16:52.661810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.551 [2024-11-26 23:16:52.661818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:13.551 [2024-11-26 23:16:52.661825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:13.551 [2024-11-26 23:16:52.661832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.551 [2024-11-26 23:16:52.668942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.551 [2024-11-26 23:16:52.668996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:13.551 [2024-11-26 23:16:52.669009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.087 ms 00:32:13.551 [2024-11-26 23:16:52.669019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.551 [2024-11-26 23:16:52.669122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.551 [2024-11-26 23:16:52.669136] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:13.551 [2024-11-26 23:16:52.669146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:32:13.551 [2024-11-26 23:16:52.669154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.551 [2024-11-26 23:16:52.671160] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 93.272 ms, result 0 00:32:14.939  [2024-11-26T23:16:55.012Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-26T23:16:55.959Z] Copying: 21/1024 [MB] (10 MBps) [2024-11-26T23:16:56.900Z] Copying: 32/1024 [MB] (10 MBps) [2024-11-26T23:16:58.285Z] Copying: 43/1024 [MB] (10 MBps) [2024-11-26T23:16:59.231Z] Copying: 53/1024 [MB] (10 MBps) [2024-11-26T23:17:00.175Z] Copying: 64/1024 [MB] (10 MBps) [2024-11-26T23:17:01.118Z] Copying: 74/1024 [MB] (10 MBps) [2024-11-26T23:17:02.058Z] Copying: 85/1024 [MB] (10 MBps) [2024-11-26T23:17:03.002Z] Copying: 95/1024 [MB] (10 MBps) [2024-11-26T23:17:03.943Z] Copying: 105/1024 [MB] (10 MBps) [2024-11-26T23:17:04.888Z] Copying: 115/1024 [MB] (10 MBps) [2024-11-26T23:17:06.275Z] Copying: 126/1024 [MB] (10 MBps) [2024-11-26T23:17:07.217Z] Copying: 136/1024 [MB] (10 MBps) [2024-11-26T23:17:08.160Z] Copying: 150/1024 [MB] (14 MBps) [2024-11-26T23:17:09.103Z] Copying: 163/1024 [MB] (12 MBps) [2024-11-26T23:17:10.042Z] Copying: 183/1024 [MB] (20 MBps) [2024-11-26T23:17:10.984Z] Copying: 195/1024 [MB] (11 MBps) [2024-11-26T23:17:11.950Z] Copying: 210/1024 [MB] (15 MBps) [2024-11-26T23:17:12.893Z] Copying: 235/1024 [MB] (25 MBps) [2024-11-26T23:17:14.277Z] Copying: 255/1024 [MB] (19 MBps) [2024-11-26T23:17:15.237Z] Copying: 272/1024 [MB] (16 MBps) [2024-11-26T23:17:16.181Z] Copying: 286/1024 [MB] (14 MBps) [2024-11-26T23:17:17.147Z] Copying: 309/1024 [MB] (22 MBps) [2024-11-26T23:17:18.089Z] Copying: 334/1024 [MB] (25 MBps) [2024-11-26T23:17:19.032Z] Copying: 351/1024 [MB] (17 MBps) [2024-11-26T23:17:19.972Z] Copying: 370/1024 [MB] (18 MBps) [2024-11-26T23:17:20.916Z] Copying: 394/1024 [MB] (23 MBps) [2024-11-26T23:17:22.300Z] Copying: 411/1024 [MB] (17 MBps) [2024-11-26T23:17:22.873Z] Copying: 430/1024 [MB] (18 MBps) [2024-11-26T23:17:24.265Z] Copying: 446/1024 [MB] (16 MBps) [2024-11-26T23:17:25.211Z] Copying: 459/1024 [MB] (13 MBps) [2024-11-26T23:17:26.151Z] Copying: 480/1024 [MB] (21 MBps) [2024-11-26T23:17:27.097Z] Copying: 502/1024 [MB] (22 MBps) [2024-11-26T23:17:28.047Z] Copying: 520/1024 [MB] (17 MBps) [2024-11-26T23:17:28.992Z] Copying: 540/1024 [MB] (19 MBps) [2024-11-26T23:17:29.949Z] Copying: 556/1024 [MB] (15 MBps) [2024-11-26T23:17:30.896Z] Copying: 569/1024 [MB] (13 MBps) [2024-11-26T23:17:32.295Z] Copying: 585/1024 [MB] (15 MBps) [2024-11-26T23:17:32.869Z] Copying: 603/1024 [MB] (18 MBps) [2024-11-26T23:17:34.257Z] Copying: 624/1024 [MB] (21 MBps) [2024-11-26T23:17:35.202Z] Copying: 645/1024 [MB] (20 MBps) [2024-11-26T23:17:36.146Z] Copying: 656/1024 [MB] (10 MBps) [2024-11-26T23:17:37.092Z] Copying: 667/1024 [MB] (11 MBps) [2024-11-26T23:17:38.036Z] Copying: 677/1024 [MB] (10 MBps) [2024-11-26T23:17:38.980Z] Copying: 691/1024 [MB] (14 MBps) [2024-11-26T23:17:39.920Z] Copying: 702/1024 [MB] (10 MBps) [2024-11-26T23:17:41.306Z] Copying: 718/1024 [MB] (16 MBps) [2024-11-26T23:17:41.880Z] Copying: 734/1024 [MB] (15 MBps) [2024-11-26T23:17:43.266Z] Copying: 752/1024 [MB] (18 MBps) [2024-11-26T23:17:44.210Z] Copying: 771/1024 [MB] (19 MBps) [2024-11-26T23:17:45.154Z] Copying: 787/1024 [MB] (15 MBps) 
[2024-11-26T23:17:46.157Z] Copying: 802/1024 [MB] (15 MBps) [2024-11-26T23:17:47.103Z] Copying: 814/1024 [MB] (11 MBps) [2024-11-26T23:17:48.048Z] Copying: 828/1024 [MB] (14 MBps) [2024-11-26T23:17:48.993Z] Copying: 843/1024 [MB] (15 MBps) [2024-11-26T23:17:49.937Z] Copying: 858/1024 [MB] (15 MBps) [2024-11-26T23:17:50.877Z] Copying: 870/1024 [MB] (12 MBps) [2024-11-26T23:17:52.265Z] Copying: 883/1024 [MB] (12 MBps) [2024-11-26T23:17:53.207Z] Copying: 898/1024 [MB] (14 MBps) [2024-11-26T23:17:54.161Z] Copying: 913/1024 [MB] (15 MBps) [2024-11-26T23:17:55.109Z] Copying: 927/1024 [MB] (14 MBps) [2024-11-26T23:17:56.053Z] Copying: 938/1024 [MB] (10 MBps) [2024-11-26T23:17:56.995Z] Copying: 950/1024 [MB] (12 MBps) [2024-11-26T23:17:57.939Z] Copying: 973/1024 [MB] (23 MBps) [2024-11-26T23:17:58.885Z] Copying: 993/1024 [MB] (19 MBps) [2024-11-26T23:17:59.834Z] Copying: 1015/1024 [MB] (22 MBps) [2024-11-26T23:17:59.834Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-26 23:17:59.825832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.707 [2024-11-26 23:17:59.825895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:20.707 [2024-11-26 23:17:59.825911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:20.707 [2024-11-26 23:17:59.825920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.707 [2024-11-26 23:17:59.825947] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:20.707 [2024-11-26 23:17:59.826534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.707 [2024-11-26 23:17:59.826559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:20.707 [2024-11-26 23:17:59.826571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:33:20.707 [2024-11-26 23:17:59.826580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.707 [2024-11-26 23:17:59.826798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.707 [2024-11-26 23:17:59.826809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:20.707 [2024-11-26 23:17:59.826818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:33:20.707 [2024-11-26 23:17:59.826827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.707 [2024-11-26 23:17:59.826860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.707 [2024-11-26 23:17:59.826870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:20.707 [2024-11-26 23:17:59.826885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:20.707 [2024-11-26 23:17:59.826893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.707 [2024-11-26 23:17:59.826961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.707 [2024-11-26 23:17:59.826980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:20.707 [2024-11-26 23:17:59.826989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:33:20.707 [2024-11-26 23:17:59.826997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.707 [2024-11-26 23:17:59.827012] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:20.707 [2024-11-26 23:17:59.827028] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:20.707 [2024-11-26 23:17:59.827040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:20.707 [2024-11-26 23:17:59.827049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:20.707 [2024-11-26 23:17:59.827057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:20.707 [2024-11-26 23:17:59.827067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:20.707 [2024-11-26 23:17:59.827075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:20.707 [2024-11-26 23:17:59.827084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:20.707 [2024-11-26 23:17:59.827092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:20.707 [2024-11-26 23:17:59.827100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:20.707 [2024-11-26 23:17:59.827108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:20.707 [2024-11-26 23:17:59.827116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:20.707 [2024-11-26 23:17:59.827123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:20.707 [2024-11-26 23:17:59.827131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:20.707 [2024-11-26 23:17:59.827139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:20.707 [2024-11-26 23:17:59.827146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:20.707 [2024-11-26 23:17:59.827154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827243] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 
[2024-11-26 23:17:59.827461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 
state: free 00:33:20.708 [2024-11-26 23:17:59.827646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 
0 / 261120 wr_cnt: 0 state: free 00:33:20.708 [2024-11-26 23:17:59.827851] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:20.708 [2024-11-26 23:17:59.827859] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7086a201-ebb6-43da-bca8-1ef81f58afda 00:33:20.708 [2024-11-26 23:17:59.827866] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:20.708 [2024-11-26 23:17:59.827875] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:33:20.708 [2024-11-26 23:17:59.827882] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:20.708 [2024-11-26 23:17:59.827893] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:20.708 [2024-11-26 23:17:59.827900] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:20.708 [2024-11-26 23:17:59.827908] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:20.708 [2024-11-26 23:17:59.827919] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:20.708 [2024-11-26 23:17:59.827926] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:20.708 [2024-11-26 23:17:59.827937] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:20.708 [2024-11-26 23:17:59.827945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.708 [2024-11-26 23:17:59.827961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:20.708 [2024-11-26 23:17:59.827972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.934 ms 00:33:20.709 [2024-11-26 23:17:59.827980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.709 [2024-11-26 23:17:59.829974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.709 [2024-11-26 23:17:59.830008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:20.709 [2024-11-26 23:17:59.830018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.972 ms 00:33:20.709 [2024-11-26 23:17:59.830025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.709 [2024-11-26 23:17:59.830138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.709 [2024-11-26 23:17:59.830151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:20.709 [2024-11-26 23:17:59.830161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:33:20.709 [2024-11-26 23:17:59.830170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.970 [2024-11-26 23:17:59.836792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:20.971 [2024-11-26 23:17:59.836829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:20.971 [2024-11-26 23:17:59.836845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:20.971 [2024-11-26 23:17:59.836854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.971 [2024-11-26 23:17:59.836913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:20.971 [2024-11-26 23:17:59.836927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:20.971 [2024-11-26 23:17:59.836935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:20.971 [2024-11-26 23:17:59.836944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:33:20.971 [2024-11-26 23:17:59.836994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:20.971 [2024-11-26 23:17:59.837004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:20.971 [2024-11-26 23:17:59.837013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:20.971 [2024-11-26 23:17:59.837021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.971 [2024-11-26 23:17:59.837069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:20.971 [2024-11-26 23:17:59.837080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:20.971 [2024-11-26 23:17:59.837092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:20.971 [2024-11-26 23:17:59.837099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.971 [2024-11-26 23:17:59.849322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:20.971 [2024-11-26 23:17:59.849365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:20.971 [2024-11-26 23:17:59.849385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:20.971 [2024-11-26 23:17:59.849393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.971 [2024-11-26 23:17:59.859554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:20.971 [2024-11-26 23:17:59.859598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:20.971 [2024-11-26 23:17:59.859616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:20.971 [2024-11-26 23:17:59.859625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.971 [2024-11-26 23:17:59.859674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:20.971 [2024-11-26 23:17:59.859683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:20.971 [2024-11-26 23:17:59.859691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:20.971 [2024-11-26 23:17:59.859699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.971 [2024-11-26 23:17:59.859741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:20.971 [2024-11-26 23:17:59.859753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:20.971 [2024-11-26 23:17:59.859762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:20.971 [2024-11-26 23:17:59.859777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.971 [2024-11-26 23:17:59.859830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:20.971 [2024-11-26 23:17:59.859841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:20.971 [2024-11-26 23:17:59.859856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:20.971 [2024-11-26 23:17:59.859864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.971 [2024-11-26 23:17:59.859892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:20.971 [2024-11-26 23:17:59.859903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:20.971 [2024-11-26 23:17:59.859913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:20.971 [2024-11-26 
23:17:59.859922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.971 [2024-11-26 23:17:59.859963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:20.971 [2024-11-26 23:17:59.859973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:20.971 [2024-11-26 23:17:59.859985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:20.971 [2024-11-26 23:17:59.859995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.971 [2024-11-26 23:17:59.860037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:20.971 [2024-11-26 23:17:59.860049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:20.971 [2024-11-26 23:17:59.860062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:20.971 [2024-11-26 23:17:59.860075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.971 [2024-11-26 23:17:59.860202] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 34.345 ms, result 0 00:33:21.232 00:33:21.232 00:33:21.232 23:18:00 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:23.156 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:23.156 23:18:02 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:33:23.156 [2024-11-26 23:18:02.165345] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:33:23.156 [2024-11-26 23:18:02.165448] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98419 ] 00:33:23.418 [2024-11-26 23:18:02.292934] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:33:23.418 [2024-11-26 23:18:02.323961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:23.418 [2024-11-26 23:18:02.352017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:23.418 [2024-11-26 23:18:02.502395] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:23.418 [2024-11-26 23:18:02.502513] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:23.682 [2024-11-26 23:18:02.666348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.682 [2024-11-26 23:18:02.666426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:23.682 [2024-11-26 23:18:02.666452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:23.682 [2024-11-26 23:18:02.666465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.682 [2024-11-26 23:18:02.666556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.682 [2024-11-26 23:18:02.666581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:23.682 [2024-11-26 23:18:02.666596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:33:23.682 [2024-11-26 23:18:02.666613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.682 [2024-11-26 23:18:02.666652] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:23.682 [2024-11-26 23:18:02.667028] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:23.682 [2024-11-26 23:18:02.667068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.682 [2024-11-26 23:18:02.667086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:23.682 [2024-11-26 23:18:02.667101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:33:23.682 [2024-11-26 23:18:02.667114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.682 [2024-11-26 23:18:02.667596] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:23.682 [2024-11-26 23:18:02.667649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.682 [2024-11-26 23:18:02.667665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:23.682 [2024-11-26 23:18:02.667692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:33:23.682 [2024-11-26 23:18:02.667710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.682 [2024-11-26 23:18:02.667798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.682 [2024-11-26 23:18:02.667813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:23.682 [2024-11-26 23:18:02.667832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:33:23.682 [2024-11-26 23:18:02.667846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.682 [2024-11-26 23:18:02.668232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.682 [2024-11-26 23:18:02.668265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:23.682 [2024-11-26 23:18:02.668280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:33:23.682 [2024-11-26 23:18:02.668313] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.682 [2024-11-26 23:18:02.668447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.682 [2024-11-26 23:18:02.668462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:23.682 [2024-11-26 23:18:02.668482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:33:23.682 [2024-11-26 23:18:02.668494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.682 [2024-11-26 23:18:02.668538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.682 [2024-11-26 23:18:02.668562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:23.682 [2024-11-26 23:18:02.668576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:33:23.682 [2024-11-26 23:18:02.668589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.682 [2024-11-26 23:18:02.668632] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:23.682 [2024-11-26 23:18:02.671565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.682 [2024-11-26 23:18:02.671610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:23.682 [2024-11-26 23:18:02.671626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.946 ms 00:33:23.682 [2024-11-26 23:18:02.671639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.682 [2024-11-26 23:18:02.671691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.683 [2024-11-26 23:18:02.671705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:23.683 [2024-11-26 23:18:02.671719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:33:23.683 [2024-11-26 23:18:02.671731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.683 [2024-11-26 23:18:02.671815] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:23.683 [2024-11-26 23:18:02.671852] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:23.683 [2024-11-26 23:18:02.671922] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:23.683 [2024-11-26 23:18:02.671948] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:23.683 [2024-11-26 23:18:02.672104] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:23.683 [2024-11-26 23:18:02.672141] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:23.683 [2024-11-26 23:18:02.672166] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:23.683 [2024-11-26 23:18:02.672189] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:23.683 [2024-11-26 23:18:02.672205] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:23.683 [2024-11-26 23:18:02.672228] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:23.683 [2024-11-26 23:18:02.672242] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:33:23.683 [2024-11-26 23:18:02.672256] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:23.683 [2024-11-26 23:18:02.672272] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:23.683 [2024-11-26 23:18:02.672289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.683 [2024-11-26 23:18:02.672323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:23.683 [2024-11-26 23:18:02.672336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.477 ms 00:33:23.683 [2024-11-26 23:18:02.672351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.683 [2024-11-26 23:18:02.672479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.683 [2024-11-26 23:18:02.672544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:23.683 [2024-11-26 23:18:02.672560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:33:23.683 [2024-11-26 23:18:02.672579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.683 [2024-11-26 23:18:02.672722] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:23.683 [2024-11-26 23:18:02.672751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:23.683 [2024-11-26 23:18:02.672767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:23.683 [2024-11-26 23:18:02.672781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:23.683 [2024-11-26 23:18:02.672795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:23.683 [2024-11-26 23:18:02.672807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:23.683 [2024-11-26 23:18:02.672819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:23.683 [2024-11-26 23:18:02.672830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:23.683 [2024-11-26 23:18:02.672853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:23.683 [2024-11-26 23:18:02.672865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:23.683 [2024-11-26 23:18:02.672876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:23.683 [2024-11-26 23:18:02.672889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:23.683 [2024-11-26 23:18:02.672900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:23.683 [2024-11-26 23:18:02.672912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:23.683 [2024-11-26 23:18:02.672923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:23.683 [2024-11-26 23:18:02.672935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:23.683 [2024-11-26 23:18:02.672946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:23.683 [2024-11-26 23:18:02.672962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:23.683 [2024-11-26 23:18:02.672974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:23.683 [2024-11-26 23:18:02.672985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:23.683 [2024-11-26 23:18:02.672997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:23.683 [2024-11-26 23:18:02.673008] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:23.683 [2024-11-26 23:18:02.673018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:23.683 [2024-11-26 23:18:02.673044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:23.683 [2024-11-26 23:18:02.673056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:23.683 [2024-11-26 23:18:02.673068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:23.683 [2024-11-26 23:18:02.673080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:23.683 [2024-11-26 23:18:02.673091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:23.683 [2024-11-26 23:18:02.673104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:23.683 [2024-11-26 23:18:02.673116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:23.683 [2024-11-26 23:18:02.673128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:23.683 [2024-11-26 23:18:02.673140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:23.683 [2024-11-26 23:18:02.673152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:23.683 [2024-11-26 23:18:02.673174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:23.683 [2024-11-26 23:18:02.673187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:23.683 [2024-11-26 23:18:02.673199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:23.683 [2024-11-26 23:18:02.673209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:23.683 [2024-11-26 23:18:02.673220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:23.683 [2024-11-26 23:18:02.673232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:23.683 [2024-11-26 23:18:02.673245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:23.683 [2024-11-26 23:18:02.673258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:23.683 [2024-11-26 23:18:02.673270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:23.683 [2024-11-26 23:18:02.673283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:23.683 [2024-11-26 23:18:02.673315] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:23.683 [2024-11-26 23:18:02.673338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:23.683 [2024-11-26 23:18:02.673356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:23.683 [2024-11-26 23:18:02.673370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:23.683 [2024-11-26 23:18:02.673382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:23.683 [2024-11-26 23:18:02.673395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:23.683 [2024-11-26 23:18:02.673410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:23.683 [2024-11-26 23:18:02.673422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:23.683 [2024-11-26 23:18:02.673435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:23.683 [2024-11-26 23:18:02.673448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:33:23.684 [2024-11-26 23:18:02.673463] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:23.684 [2024-11-26 23:18:02.673480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:23.684 [2024-11-26 23:18:02.673494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:23.684 [2024-11-26 23:18:02.673509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:23.684 [2024-11-26 23:18:02.673524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:23.684 [2024-11-26 23:18:02.673536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:23.684 [2024-11-26 23:18:02.673549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:23.684 [2024-11-26 23:18:02.673563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:23.684 [2024-11-26 23:18:02.673577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:23.684 [2024-11-26 23:18:02.673590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:23.684 [2024-11-26 23:18:02.673604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:23.684 [2024-11-26 23:18:02.673617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:23.684 [2024-11-26 23:18:02.673634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:23.684 [2024-11-26 23:18:02.673648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:23.684 [2024-11-26 23:18:02.673661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:23.684 [2024-11-26 23:18:02.673674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:23.684 [2024-11-26 23:18:02.673689] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:23.684 [2024-11-26 23:18:02.673703] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:23.684 [2024-11-26 23:18:02.673719] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:23.684 [2024-11-26 23:18:02.673736] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:23.684 [2024-11-26 23:18:02.673749] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:23.684 [2024-11-26 23:18:02.673761] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:23.684 [2024-11-26 23:18:02.673775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.684 [2024-11-26 23:18:02.673789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:23.684 [2024-11-26 23:18:02.673803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.147 ms 00:33:23.684 [2024-11-26 23:18:02.673816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.684 [2024-11-26 23:18:02.688450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.684 [2024-11-26 23:18:02.688507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:23.684 [2024-11-26 23:18:02.688526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.558 ms 00:33:23.684 [2024-11-26 23:18:02.688541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.684 [2024-11-26 23:18:02.688652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.684 [2024-11-26 23:18:02.688666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:23.684 [2024-11-26 23:18:02.688681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:33:23.684 [2024-11-26 23:18:02.688701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.684 [2024-11-26 23:18:02.713532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.684 [2024-11-26 23:18:02.713598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:23.684 [2024-11-26 23:18:02.713617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.746 ms 00:33:23.684 [2024-11-26 23:18:02.713632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.684 [2024-11-26 23:18:02.713697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.684 [2024-11-26 23:18:02.713716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:23.684 [2024-11-26 23:18:02.713732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:23.684 [2024-11-26 23:18:02.713748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.684 [2024-11-26 23:18:02.713917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.684 [2024-11-26 23:18:02.713937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:23.684 [2024-11-26 23:18:02.713952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:33:23.684 [2024-11-26 23:18:02.713967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.684 [2024-11-26 23:18:02.714173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.684 [2024-11-26 23:18:02.714209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:23.684 [2024-11-26 23:18:02.714224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:33:23.684 [2024-11-26 23:18:02.714237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.684 [2024-11-26 23:18:02.725753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.684 [2024-11-26 
23:18:02.725812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:23.684 [2024-11-26 23:18:02.725828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.482 ms 00:33:23.684 [2024-11-26 23:18:02.725841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.684 [2024-11-26 23:18:02.726055] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:23.684 [2024-11-26 23:18:02.726090] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:23.684 [2024-11-26 23:18:02.726108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.684 [2024-11-26 23:18:02.726129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:23.684 [2024-11-26 23:18:02.726148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:33:23.684 [2024-11-26 23:18:02.726161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.684 [2024-11-26 23:18:02.738576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.684 [2024-11-26 23:18:02.738627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:23.684 [2024-11-26 23:18:02.738653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.381 ms 00:33:23.684 [2024-11-26 23:18:02.738665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.684 [2024-11-26 23:18:02.738856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.684 [2024-11-26 23:18:02.738880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:23.684 [2024-11-26 23:18:02.738908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:33:23.684 [2024-11-26 23:18:02.738921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.684 [2024-11-26 23:18:02.739005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.684 [2024-11-26 23:18:02.739038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:23.684 [2024-11-26 23:18:02.739058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:33:23.684 [2024-11-26 23:18:02.739070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.684 [2024-11-26 23:18:02.739508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.684 [2024-11-26 23:18:02.739551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:23.684 [2024-11-26 23:18:02.739567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:33:23.684 [2024-11-26 23:18:02.739579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.685 [2024-11-26 23:18:02.739609] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:23.685 [2024-11-26 23:18:02.739625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.685 [2024-11-26 23:18:02.739643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:23.685 [2024-11-26 23:18:02.739655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:33:23.685 [2024-11-26 23:18:02.739668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.685 [2024-11-26 23:18:02.750480] 
ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:23.685 [2024-11-26 23:18:02.750704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.685 [2024-11-26 23:18:02.750733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:23.685 [2024-11-26 23:18:02.750749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.006 ms 00:33:23.685 [2024-11-26 23:18:02.750767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.685 [2024-11-26 23:18:02.753383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.685 [2024-11-26 23:18:02.753434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:23.685 [2024-11-26 23:18:02.753455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.580 ms 00:33:23.685 [2024-11-26 23:18:02.753466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.685 [2024-11-26 23:18:02.753610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.685 [2024-11-26 23:18:02.753629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:23.685 [2024-11-26 23:18:02.753652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:33:23.685 [2024-11-26 23:18:02.753674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.685 [2024-11-26 23:18:02.753723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.685 [2024-11-26 23:18:02.753739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:23.685 [2024-11-26 23:18:02.753756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:23.685 [2024-11-26 23:18:02.753769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.685 [2024-11-26 23:18:02.753826] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:23.685 [2024-11-26 23:18:02.753843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.685 [2024-11-26 23:18:02.753856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:23.685 [2024-11-26 23:18:02.753870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:33:23.685 [2024-11-26 23:18:02.753883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.685 [2024-11-26 23:18:02.761449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.685 [2024-11-26 23:18:02.761514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:23.685 [2024-11-26 23:18:02.761532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.528 ms 00:33:23.685 [2024-11-26 23:18:02.761545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.685 [2024-11-26 23:18:02.761673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.685 [2024-11-26 23:18:02.761696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:23.685 [2024-11-26 23:18:02.761712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:33:23.685 [2024-11-26 23:18:02.761726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.685 [2024-11-26 23:18:02.763174] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 96.348 ms, result 0 
00:33:25.073  [2024-11-26T23:18:05.145Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-26T23:18:06.106Z] Copying: 31/1024 [MB] (13 MBps) [2024-11-26T23:18:07.050Z] Copying: 46/1024 [MB] (15 MBps) [2024-11-26T23:18:07.992Z] Copying: 60/1024 [MB] (13 MBps) [2024-11-26T23:18:08.937Z] Copying: 82/1024 [MB] (22 MBps) [2024-11-26T23:18:09.879Z] Copying: 101/1024 [MB] (18 MBps) [2024-11-26T23:18:10.818Z] Copying: 121/1024 [MB] (20 MBps) [2024-11-26T23:18:12.204Z] Copying: 137/1024 [MB] (16 MBps) [2024-11-26T23:18:12.778Z] Copying: 161/1024 [MB] (23 MBps) [2024-11-26T23:18:14.172Z] Copying: 176/1024 [MB] (14 MBps) [2024-11-26T23:18:14.822Z] Copying: 190/1024 [MB] (14 MBps) [2024-11-26T23:18:16.211Z] Copying: 206/1024 [MB] (15 MBps) [2024-11-26T23:18:16.791Z] Copying: 216/1024 [MB] (10 MBps) [2024-11-26T23:18:18.180Z] Copying: 235/1024 [MB] (19 MBps) [2024-11-26T23:18:19.123Z] Copying: 250/1024 [MB] (14 MBps) [2024-11-26T23:18:20.066Z] Copying: 264/1024 [MB] (13 MBps) [2024-11-26T23:18:21.010Z] Copying: 284/1024 [MB] (19 MBps) [2024-11-26T23:18:21.957Z] Copying: 304/1024 [MB] (20 MBps) [2024-11-26T23:18:22.905Z] Copying: 314/1024 [MB] (10 MBps) [2024-11-26T23:18:23.843Z] Copying: 331956/1048576 [kB] (10096 kBps) [2024-11-26T23:18:24.784Z] Copying: 353/1024 [MB] (29 MBps) [2024-11-26T23:18:26.171Z] Copying: 399/1024 [MB] (46 MBps) [2024-11-26T23:18:27.117Z] Copying: 418/1024 [MB] (18 MBps) [2024-11-26T23:18:28.076Z] Copying: 432/1024 [MB] (14 MBps) [2024-11-26T23:18:29.016Z] Copying: 442/1024 [MB] (10 MBps) [2024-11-26T23:18:29.972Z] Copying: 468/1024 [MB] (25 MBps) [2024-11-26T23:18:30.917Z] Copying: 487/1024 [MB] (18 MBps) [2024-11-26T23:18:31.867Z] Copying: 501/1024 [MB] (13 MBps) [2024-11-26T23:18:32.811Z] Copying: 520/1024 [MB] (19 MBps) [2024-11-26T23:18:34.189Z] Copying: 534/1024 [MB] (14 MBps) [2024-11-26T23:18:35.133Z] Copying: 555/1024 [MB] (20 MBps) [2024-11-26T23:18:36.075Z] Copying: 569/1024 [MB] (14 MBps) [2024-11-26T23:18:37.021Z] Copying: 583/1024 [MB] (13 MBps) [2024-11-26T23:18:37.967Z] Copying: 596/1024 [MB] (13 MBps) [2024-11-26T23:18:38.908Z] Copying: 609/1024 [MB] (12 MBps) [2024-11-26T23:18:39.843Z] Copying: 620/1024 [MB] (10 MBps) [2024-11-26T23:18:40.789Z] Copying: 658/1024 [MB] (38 MBps) [2024-11-26T23:18:42.177Z] Copying: 673/1024 [MB] (14 MBps) [2024-11-26T23:18:43.130Z] Copying: 689/1024 [MB] (16 MBps) [2024-11-26T23:18:43.785Z] Copying: 700/1024 [MB] (11 MBps) [2024-11-26T23:18:45.172Z] Copying: 714/1024 [MB] (14 MBps) [2024-11-26T23:18:46.114Z] Copying: 727/1024 [MB] (12 MBps) [2024-11-26T23:18:47.060Z] Copying: 742/1024 [MB] (14 MBps) [2024-11-26T23:18:48.004Z] Copying: 754/1024 [MB] (12 MBps) [2024-11-26T23:18:48.970Z] Copying: 765/1024 [MB] (10 MBps) [2024-11-26T23:18:49.912Z] Copying: 775/1024 [MB] (10 MBps) [2024-11-26T23:18:50.854Z] Copying: 787/1024 [MB] (12 MBps) [2024-11-26T23:18:51.795Z] Copying: 798/1024 [MB] (11 MBps) [2024-11-26T23:18:53.173Z] Copying: 811/1024 [MB] (12 MBps) [2024-11-26T23:18:54.105Z] Copying: 828/1024 [MB] (17 MBps) [2024-11-26T23:18:55.036Z] Copying: 859/1024 [MB] (30 MBps) [2024-11-26T23:18:55.972Z] Copying: 891/1024 [MB] (31 MBps) [2024-11-26T23:18:56.915Z] Copying: 926/1024 [MB] (35 MBps) [2024-11-26T23:18:57.864Z] Copying: 939/1024 [MB] (12 MBps) [2024-11-26T23:18:58.807Z] Copying: 951/1024 [MB] (12 MBps) [2024-11-26T23:19:00.181Z] Copying: 967/1024 [MB] (15 MBps) [2024-11-26T23:19:01.116Z] Copying: 1011/1024 [MB] (44 MBps) [2024-11-26T23:19:01.116Z] Copying: 1023/1024 [MB] (12 MBps) [2024-11-26T23:19:01.116Z] Copying: 1024/1024 [MB] 
(average 17 MBps)[2024-11-26 23:19:00.972238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.989 [2024-11-26 23:19:00.972408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:21.989 [2024-11-26 23:19:00.972428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:34:21.989 [2024-11-26 23:19:00.972436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.989 [2024-11-26 23:19:00.974699] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:21.989 [2024-11-26 23:19:00.976420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.989 [2024-11-26 23:19:00.976443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:21.989 [2024-11-26 23:19:00.976452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.691 ms 00:34:21.989 [2024-11-26 23:19:00.976464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.989 [2024-11-26 23:19:00.983934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.989 [2024-11-26 23:19:00.983960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:21.989 [2024-11-26 23:19:00.983968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.786 ms 00:34:21.989 [2024-11-26 23:19:00.983974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.989 [2024-11-26 23:19:00.983998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.989 [2024-11-26 23:19:00.984006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:21.989 [2024-11-26 23:19:00.984018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:34:21.989 [2024-11-26 23:19:00.984025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.989 [2024-11-26 23:19:00.984071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.989 [2024-11-26 23:19:00.984078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:21.989 [2024-11-26 23:19:00.984085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:34:21.989 [2024-11-26 23:19:00.984094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.989 [2024-11-26 23:19:00.984105] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:21.989 [2024-11-26 23:19:00.984115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 125184 / 261120 wr_cnt: 1 state: open 00:34:21.989 [2024-11-26 23:19:00.984126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 [2024-11-26 23:19:00.984133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 [2024-11-26 23:19:00.984139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 [2024-11-26 23:19:00.984145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 [2024-11-26 23:19:00.984151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 [2024-11-26 23:19:00.984157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 
[2024-11-26 23:19:00.984163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 [2024-11-26 23:19:00.984169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 [2024-11-26 23:19:00.984175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 [2024-11-26 23:19:00.984181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 [2024-11-26 23:19:00.984187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 [2024-11-26 23:19:00.984193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 [2024-11-26 23:19:00.984199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 [2024-11-26 23:19:00.984205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 [2024-11-26 23:19:00.984211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 [2024-11-26 23:19:00.984217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 [2024-11-26 23:19:00.984222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 [2024-11-26 23:19:00.984228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 [2024-11-26 23:19:00.984263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 [2024-11-26 23:19:00.984270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 [2024-11-26 23:19:00.984277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 [2024-11-26 23:19:00.984284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 [2024-11-26 23:19:00.984290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 [2024-11-26 23:19:00.984312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 [2024-11-26 23:19:00.984319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 [2024-11-26 23:19:00.984325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 [2024-11-26 23:19:00.984331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 [2024-11-26 23:19:00.984337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 [2024-11-26 23:19:00.984349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 [2024-11-26 23:19:00.984355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 [2024-11-26 23:19:00.984361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: 
free 00:34:21.989 [2024-11-26 23:19:00.984367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 [2024-11-26 23:19:00.984373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 [2024-11-26 23:19:00.984379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 [2024-11-26 23:19:00.984385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:21.989 [2024-11-26 23:19:00.984391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 
261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:21.990 [2024-11-26 23:19:00.984779] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:21.990 [2024-11-26 23:19:00.984786] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7086a201-ebb6-43da-bca8-1ef81f58afda 00:34:21.990 [2024-11-26 23:19:00.984792] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 125184 00:34:21.990 [2024-11-26 23:19:00.984797] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 125216 00:34:21.990 [2024-11-26 23:19:00.984803] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 125184 00:34:21.990 [2024-11-26 23:19:00.984811] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:34:21.990 [2024-11-26 23:19:00.984817] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:21.990 [2024-11-26 23:19:00.984824] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] crit: 0 00:34:21.990 [2024-11-26 23:19:00.984830] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:21.990 [2024-11-26 23:19:00.984835] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:21.990 [2024-11-26 23:19:00.984840] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:21.990 [2024-11-26 23:19:00.984846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.990 [2024-11-26 23:19:00.984853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:21.990 [2024-11-26 23:19:00.984859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.742 ms 00:34:21.990 [2024-11-26 23:19:00.984865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.990 [2024-11-26 23:19:00.986651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.990 [2024-11-26 23:19:00.986675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:21.990 [2024-11-26 23:19:00.986684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.773 ms 00:34:21.990 [2024-11-26 23:19:00.986691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.990 [2024-11-26 23:19:00.986782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.990 [2024-11-26 23:19:00.986789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:21.990 [2024-11-26 23:19:00.986796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:34:21.990 [2024-11-26 23:19:00.986806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.990 [2024-11-26 23:19:00.992445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.990 [2024-11-26 23:19:00.992469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:21.990 [2024-11-26 23:19:00.992477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.991 [2024-11-26 23:19:00.992483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.991 [2024-11-26 23:19:00.992524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.991 [2024-11-26 23:19:00.992531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:21.991 [2024-11-26 23:19:00.992537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.991 [2024-11-26 23:19:00.992542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.991 [2024-11-26 23:19:00.992578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.991 [2024-11-26 23:19:00.992589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:21.991 [2024-11-26 23:19:00.992595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.991 [2024-11-26 23:19:00.992601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.991 [2024-11-26 23:19:00.992614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.991 [2024-11-26 23:19:00.992621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:21.991 [2024-11-26 23:19:00.992627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.991 [2024-11-26 23:19:00.992633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.991 [2024-11-26 
23:19:01.003174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.991 [2024-11-26 23:19:01.003211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:21.991 [2024-11-26 23:19:01.003220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.991 [2024-11-26 23:19:01.003226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.991 [2024-11-26 23:19:01.012086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.991 [2024-11-26 23:19:01.012119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:21.991 [2024-11-26 23:19:01.012128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.991 [2024-11-26 23:19:01.012135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.991 [2024-11-26 23:19:01.012180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.991 [2024-11-26 23:19:01.012188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:21.991 [2024-11-26 23:19:01.012198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.991 [2024-11-26 23:19:01.012204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.991 [2024-11-26 23:19:01.012227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.991 [2024-11-26 23:19:01.012237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:21.991 [2024-11-26 23:19:01.012244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.991 [2024-11-26 23:19:01.012250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.991 [2024-11-26 23:19:01.012305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.991 [2024-11-26 23:19:01.012313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:21.991 [2024-11-26 23:19:01.012326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.991 [2024-11-26 23:19:01.012334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.991 [2024-11-26 23:19:01.012354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.991 [2024-11-26 23:19:01.012364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:21.991 [2024-11-26 23:19:01.012370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.991 [2024-11-26 23:19:01.012376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.991 [2024-11-26 23:19:01.012411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.991 [2024-11-26 23:19:01.012419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:21.991 [2024-11-26 23:19:01.012428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.991 [2024-11-26 23:19:01.012436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.991 [2024-11-26 23:19:01.012478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.991 [2024-11-26 23:19:01.012486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:21.991 [2024-11-26 23:19:01.012492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.991 [2024-11-26 23:19:01.012499] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.991 [2024-11-26 23:19:01.012607] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 42.126 ms, result 0 00:34:22.932 00:34:22.932 00:34:22.932 23:19:01 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:34:22.932 [2024-11-26 23:19:01.790903] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:34:22.932 [2024-11-26 23:19:01.791020] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99011 ] 00:34:22.932 [2024-11-26 23:19:01.923968] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:22.932 [2024-11-26 23:19:01.954537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:22.932 [2024-11-26 23:19:01.997608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:23.195 [2024-11-26 23:19:02.147252] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:23.195 [2024-11-26 23:19:02.147380] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:23.195 [2024-11-26 23:19:02.311031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.195 [2024-11-26 23:19:02.311093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:23.195 [2024-11-26 23:19:02.311110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:34:23.195 [2024-11-26 23:19:02.311118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.195 [2024-11-26 23:19:02.311192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.195 [2024-11-26 23:19:02.311203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:23.195 [2024-11-26 23:19:02.311212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:34:23.195 [2024-11-26 23:19:02.311224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.195 [2024-11-26 23:19:02.311247] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:23.195 [2024-11-26 23:19:02.311641] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:23.195 [2024-11-26 23:19:02.311687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.195 [2024-11-26 23:19:02.311699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:23.195 [2024-11-26 23:19:02.311710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:34:23.195 [2024-11-26 23:19:02.311718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.195 [2024-11-26 23:19:02.312467] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:34:23.195 [2024-11-26 23:19:02.312545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.195 [2024-11-26 23:19:02.312557] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:23.195 [2024-11-26 23:19:02.312573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:34:23.195 [2024-11-26 23:19:02.312586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.195 [2024-11-26 23:19:02.312659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.195 [2024-11-26 23:19:02.312671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:23.195 [2024-11-26 23:19:02.312681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:34:23.195 [2024-11-26 23:19:02.312688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.195 [2024-11-26 23:19:02.312993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.195 [2024-11-26 23:19:02.313013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:23.195 [2024-11-26 23:19:02.313022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:34:23.195 [2024-11-26 23:19:02.313034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.195 [2024-11-26 23:19:02.313133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.195 [2024-11-26 23:19:02.313157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:23.195 [2024-11-26 23:19:02.313169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:34:23.195 [2024-11-26 23:19:02.313180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.195 [2024-11-26 23:19:02.313214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.195 [2024-11-26 23:19:02.313223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:23.195 [2024-11-26 23:19:02.313232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:34:23.195 [2024-11-26 23:19:02.313240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.195 [2024-11-26 23:19:02.313266] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:23.195 [2024-11-26 23:19:02.316011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.195 [2024-11-26 23:19:02.316051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:23.195 [2024-11-26 23:19:02.316062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.751 ms 00:34:23.195 [2024-11-26 23:19:02.316070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.195 [2024-11-26 23:19:02.316105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.195 [2024-11-26 23:19:02.316114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:23.195 [2024-11-26 23:19:02.316130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:34:23.195 [2024-11-26 23:19:02.316138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.195 [2024-11-26 23:19:02.316207] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:23.195 [2024-11-26 23:19:02.316236] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:23.195 [2024-11-26 23:19:02.316278] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
load 0x48 bytes 00:34:23.195 [2024-11-26 23:19:02.316314] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:23.195 [2024-11-26 23:19:02.316426] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:23.195 [2024-11-26 23:19:02.316438] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:23.195 [2024-11-26 23:19:02.316450] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:23.195 [2024-11-26 23:19:02.316465] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:23.195 [2024-11-26 23:19:02.316474] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:23.195 [2024-11-26 23:19:02.316490] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:23.195 [2024-11-26 23:19:02.316498] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:23.195 [2024-11-26 23:19:02.316506] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:23.195 [2024-11-26 23:19:02.316513] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:23.195 [2024-11-26 23:19:02.316522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.195 [2024-11-26 23:19:02.316530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:23.195 [2024-11-26 23:19:02.316542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:34:23.195 [2024-11-26 23:19:02.316550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.195 [2024-11-26 23:19:02.316636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.195 [2024-11-26 23:19:02.316654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:23.195 [2024-11-26 23:19:02.316669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:34:23.195 [2024-11-26 23:19:02.316678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.195 [2024-11-26 23:19:02.316777] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:23.195 [2024-11-26 23:19:02.316798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:23.195 [2024-11-26 23:19:02.316814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:23.195 [2024-11-26 23:19:02.316823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:23.195 [2024-11-26 23:19:02.316832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:23.195 [2024-11-26 23:19:02.316840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:23.195 [2024-11-26 23:19:02.316849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:23.195 [2024-11-26 23:19:02.316857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:23.195 [2024-11-26 23:19:02.316874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:23.195 [2024-11-26 23:19:02.316881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:23.195 [2024-11-26 23:19:02.316892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:23.195 [2024-11-26 
23:19:02.316900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:23.195 [2024-11-26 23:19:02.316908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:23.195 [2024-11-26 23:19:02.316917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:23.195 [2024-11-26 23:19:02.316926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:23.195 [2024-11-26 23:19:02.316934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:23.195 [2024-11-26 23:19:02.316943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:23.195 [2024-11-26 23:19:02.316950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:23.195 [2024-11-26 23:19:02.316958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:23.195 [2024-11-26 23:19:02.316978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:23.195 [2024-11-26 23:19:02.316986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:23.195 [2024-11-26 23:19:02.316995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:23.195 [2024-11-26 23:19:02.317003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:23.195 [2024-11-26 23:19:02.317012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:23.196 [2024-11-26 23:19:02.317023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:23.196 [2024-11-26 23:19:02.317031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:23.196 [2024-11-26 23:19:02.317043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:23.196 [2024-11-26 23:19:02.317051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:23.196 [2024-11-26 23:19:02.317060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:23.196 [2024-11-26 23:19:02.317068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:23.196 [2024-11-26 23:19:02.317075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:23.196 [2024-11-26 23:19:02.317083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:23.196 [2024-11-26 23:19:02.317091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:23.196 [2024-11-26 23:19:02.317099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:23.196 [2024-11-26 23:19:02.317107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:23.196 [2024-11-26 23:19:02.317115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:23.196 [2024-11-26 23:19:02.317123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:23.196 [2024-11-26 23:19:02.317131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:23.196 [2024-11-26 23:19:02.317138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:23.196 [2024-11-26 23:19:02.317145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:23.196 [2024-11-26 23:19:02.317152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:23.196 [2024-11-26 23:19:02.317159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:23.196 [2024-11-26 23:19:02.317168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 0.12 MiB 00:34:23.196 [2024-11-26 23:19:02.317175] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:23.196 [2024-11-26 23:19:02.317183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:23.196 [2024-11-26 23:19:02.317195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:23.196 [2024-11-26 23:19:02.317203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:23.196 [2024-11-26 23:19:02.317212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:23.196 [2024-11-26 23:19:02.317219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:23.196 [2024-11-26 23:19:02.317225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:23.196 [2024-11-26 23:19:02.317232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:23.196 [2024-11-26 23:19:02.317239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:23.196 [2024-11-26 23:19:02.317245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:23.196 [2024-11-26 23:19:02.317254] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:23.196 [2024-11-26 23:19:02.317264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:23.196 [2024-11-26 23:19:02.317274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:23.196 [2024-11-26 23:19:02.317282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:23.196 [2024-11-26 23:19:02.317290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:23.196 [2024-11-26 23:19:02.317323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:23.196 [2024-11-26 23:19:02.317333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:23.196 [2024-11-26 23:19:02.317341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:23.196 [2024-11-26 23:19:02.317349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:23.196 [2024-11-26 23:19:02.317356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:23.196 [2024-11-26 23:19:02.317364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:23.196 [2024-11-26 23:19:02.317371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:23.196 [2024-11-26 23:19:02.317379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:23.196 [2024-11-26 23:19:02.317386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 
00:34:23.196 [2024-11-26 23:19:02.317394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:23.196 [2024-11-26 23:19:02.317401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:23.196 [2024-11-26 23:19:02.317409] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:23.196 [2024-11-26 23:19:02.317417] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:23.196 [2024-11-26 23:19:02.317425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:23.196 [2024-11-26 23:19:02.317433] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:23.196 [2024-11-26 23:19:02.317441] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:23.196 [2024-11-26 23:19:02.317451] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:23.196 [2024-11-26 23:19:02.317459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.196 [2024-11-26 23:19:02.317467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:23.196 [2024-11-26 23:19:02.317476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:34:23.196 [2024-11-26 23:19:02.317484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.459 [2024-11-26 23:19:02.331281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.459 [2024-11-26 23:19:02.331341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:23.459 [2024-11-26 23:19:02.331353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.748 ms 00:34:23.459 [2024-11-26 23:19:02.331362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.459 [2024-11-26 23:19:02.331447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.459 [2024-11-26 23:19:02.331456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:23.459 [2024-11-26 23:19:02.331466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:34:23.459 [2024-11-26 23:19:02.331474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.459 [2024-11-26 23:19:02.358463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.459 [2024-11-26 23:19:02.358531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:23.459 [2024-11-26 23:19:02.358548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.933 ms 00:34:23.459 [2024-11-26 23:19:02.358564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.459 [2024-11-26 23:19:02.358627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.459 [2024-11-26 23:19:02.358641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:23.459 [2024-11-26 23:19:02.358653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 
00:34:23.459 [2024-11-26 23:19:02.358676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.459 [2024-11-26 23:19:02.358807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.459 [2024-11-26 23:19:02.358827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:23.459 [2024-11-26 23:19:02.358839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:34:23.459 [2024-11-26 23:19:02.358853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.459 [2024-11-26 23:19:02.359011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.459 [2024-11-26 23:19:02.359037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:23.459 [2024-11-26 23:19:02.359049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:34:23.459 [2024-11-26 23:19:02.359058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.459 [2024-11-26 23:19:02.370250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.459 [2024-11-26 23:19:02.370319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:23.459 [2024-11-26 23:19:02.370331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.168 ms 00:34:23.459 [2024-11-26 23:19:02.370339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.459 [2024-11-26 23:19:02.370487] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:34:23.459 [2024-11-26 23:19:02.370509] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:23.459 [2024-11-26 23:19:02.370520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.459 [2024-11-26 23:19:02.370529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:23.459 [2024-11-26 23:19:02.370542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:34:23.459 [2024-11-26 23:19:02.370550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.459 [2024-11-26 23:19:02.382853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.459 [2024-11-26 23:19:02.382893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:23.459 [2024-11-26 23:19:02.382906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.286 ms 00:34:23.459 [2024-11-26 23:19:02.382914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.459 [2024-11-26 23:19:02.383057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.459 [2024-11-26 23:19:02.383067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:23.459 [2024-11-26 23:19:02.383090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:34:23.459 [2024-11-26 23:19:02.383098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.459 [2024-11-26 23:19:02.383158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.459 [2024-11-26 23:19:02.383173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:23.459 [2024-11-26 23:19:02.383181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:23.459 [2024-11-26 23:19:02.383190] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.459 [2024-11-26 23:19:02.383552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.459 [2024-11-26 23:19:02.383571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:23.459 [2024-11-26 23:19:02.383581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:34:23.459 [2024-11-26 23:19:02.383588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.459 [2024-11-26 23:19:02.383611] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:34:23.459 [2024-11-26 23:19:02.383622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.460 [2024-11-26 23:19:02.383639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:23.460 [2024-11-26 23:19:02.383651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:34:23.460 [2024-11-26 23:19:02.383659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.460 [2024-11-26 23:19:02.394403] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:23.460 [2024-11-26 23:19:02.394564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.460 [2024-11-26 23:19:02.394575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:23.460 [2024-11-26 23:19:02.394586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.885 ms 00:34:23.460 [2024-11-26 23:19:02.394598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.460 [2024-11-26 23:19:02.397258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.460 [2024-11-26 23:19:02.397305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:23.460 [2024-11-26 23:19:02.397316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.636 ms 00:34:23.460 [2024-11-26 23:19:02.397323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.460 [2024-11-26 23:19:02.397403] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:34:23.460 [2024-11-26 23:19:02.398033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.460 [2024-11-26 23:19:02.398056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:23.460 [2024-11-26 23:19:02.398069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.652 ms 00:34:23.460 [2024-11-26 23:19:02.398081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.460 [2024-11-26 23:19:02.398114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.460 [2024-11-26 23:19:02.398127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:23.460 [2024-11-26 23:19:02.398136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:34:23.460 [2024-11-26 23:19:02.398143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.460 [2024-11-26 23:19:02.398182] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:23.460 [2024-11-26 23:19:02.398192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.460 [2024-11-26 23:19:02.398200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on 
startup 00:34:23.460 [2024-11-26 23:19:02.398213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:34:23.460 [2024-11-26 23:19:02.398230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.460 [2024-11-26 23:19:02.405330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.460 [2024-11-26 23:19:02.405375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:23.460 [2024-11-26 23:19:02.405387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.081 ms 00:34:23.460 [2024-11-26 23:19:02.405395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.460 [2024-11-26 23:19:02.405488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.460 [2024-11-26 23:19:02.405499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:23.460 [2024-11-26 23:19:02.405508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:34:23.460 [2024-11-26 23:19:02.405520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.460 [2024-11-26 23:19:02.407225] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 95.692 ms, result 0 00:34:24.849  [2024-11-26T23:19:04.918Z] Copying: 13/1024 [MB] (13 MBps) [2024-11-26T23:19:05.862Z] Copying: 32/1024 [MB] (19 MBps) [2024-11-26T23:19:06.803Z] Copying: 49/1024 [MB] (16 MBps) [2024-11-26T23:19:07.747Z] Copying: 60/1024 [MB] (10 MBps) [2024-11-26T23:19:08.690Z] Copying: 72/1024 [MB] (11 MBps) [2024-11-26T23:19:09.631Z] Copying: 87/1024 [MB] (15 MBps) [2024-11-26T23:19:11.069Z] Copying: 109/1024 [MB] (21 MBps) [2024-11-26T23:19:11.641Z] Copying: 132/1024 [MB] (23 MBps) [2024-11-26T23:19:12.668Z] Copying: 143/1024 [MB] (10 MBps) [2024-11-26T23:19:14.052Z] Copying: 158/1024 [MB] (15 MBps) [2024-11-26T23:19:14.632Z] Copying: 169/1024 [MB] (10 MBps) [2024-11-26T23:19:16.016Z] Copying: 180/1024 [MB] (10 MBps) [2024-11-26T23:19:16.959Z] Copying: 190/1024 [MB] (10 MBps) [2024-11-26T23:19:17.901Z] Copying: 201/1024 [MB] (10 MBps) [2024-11-26T23:19:18.844Z] Copying: 212/1024 [MB] (10 MBps) [2024-11-26T23:19:19.785Z] Copying: 224/1024 [MB] (12 MBps) [2024-11-26T23:19:20.729Z] Copying: 237/1024 [MB] (13 MBps) [2024-11-26T23:19:21.676Z] Copying: 250/1024 [MB] (12 MBps) [2024-11-26T23:19:22.623Z] Copying: 262/1024 [MB] (11 MBps) [2024-11-26T23:19:24.018Z] Copying: 280/1024 [MB] (18 MBps) [2024-11-26T23:19:24.962Z] Copying: 298/1024 [MB] (17 MBps) [2024-11-26T23:19:25.907Z] Copying: 318/1024 [MB] (19 MBps) [2024-11-26T23:19:26.852Z] Copying: 339/1024 [MB] (21 MBps) [2024-11-26T23:19:27.801Z] Copying: 360/1024 [MB] (20 MBps) [2024-11-26T23:19:28.746Z] Copying: 380/1024 [MB] (20 MBps) [2024-11-26T23:19:29.696Z] Copying: 396/1024 [MB] (16 MBps) [2024-11-26T23:19:30.645Z] Copying: 407/1024 [MB] (11 MBps) [2024-11-26T23:19:32.034Z] Copying: 418/1024 [MB] (10 MBps) [2024-11-26T23:19:32.982Z] Copying: 433/1024 [MB] (14 MBps) [2024-11-26T23:19:33.929Z] Copying: 449/1024 [MB] (16 MBps) [2024-11-26T23:19:34.873Z] Copying: 461/1024 [MB] (12 MBps) [2024-11-26T23:19:35.817Z] Copying: 477/1024 [MB] (15 MBps) [2024-11-26T23:19:36.767Z] Copying: 496/1024 [MB] (18 MBps) [2024-11-26T23:19:37.724Z] Copying: 506/1024 [MB] (10 MBps) [2024-11-26T23:19:38.668Z] Copying: 517/1024 [MB] (11 MBps) [2024-11-26T23:19:39.618Z] Copying: 529/1024 [MB] (11 MBps) [2024-11-26T23:19:41.006Z] Copying: 545/1024 [MB] (16 MBps) 
[2024-11-26T23:19:41.660Z] Copying: 556/1024 [MB] (10 MBps) [2024-11-26T23:19:43.049Z] Copying: 567/1024 [MB] (11 MBps) [2024-11-26T23:19:43.621Z] Copying: 578/1024 [MB] (10 MBps) [2024-11-26T23:19:45.010Z] Copying: 588/1024 [MB] (10 MBps) [2024-11-26T23:19:45.955Z] Copying: 599/1024 [MB] (11 MBps) [2024-11-26T23:19:46.893Z] Copying: 610/1024 [MB] (10 MBps) [2024-11-26T23:19:47.838Z] Copying: 622/1024 [MB] (12 MBps) [2024-11-26T23:19:48.783Z] Copying: 633/1024 [MB] (10 MBps) [2024-11-26T23:19:49.727Z] Copying: 644/1024 [MB] (11 MBps) [2024-11-26T23:19:50.671Z] Copying: 656/1024 [MB] (11 MBps) [2024-11-26T23:19:51.616Z] Copying: 668/1024 [MB] (11 MBps) [2024-11-26T23:19:53.004Z] Copying: 680/1024 [MB] (11 MBps) [2024-11-26T23:19:53.949Z] Copying: 690/1024 [MB] (10 MBps) [2024-11-26T23:19:54.893Z] Copying: 701/1024 [MB] (10 MBps) [2024-11-26T23:19:55.838Z] Copying: 711/1024 [MB] (10 MBps) [2024-11-26T23:19:56.782Z] Copying: 733/1024 [MB] (21 MBps) [2024-11-26T23:19:57.730Z] Copying: 752/1024 [MB] (19 MBps) [2024-11-26T23:19:58.673Z] Copying: 772/1024 [MB] (19 MBps) [2024-11-26T23:19:59.618Z] Copying: 789/1024 [MB] (16 MBps) [2024-11-26T23:20:01.002Z] Copying: 806/1024 [MB] (16 MBps) [2024-11-26T23:20:01.944Z] Copying: 820/1024 [MB] (14 MBps) [2024-11-26T23:20:02.885Z] Copying: 832/1024 [MB] (11 MBps) [2024-11-26T23:20:03.829Z] Copying: 848/1024 [MB] (15 MBps) [2024-11-26T23:20:04.776Z] Copying: 863/1024 [MB] (15 MBps) [2024-11-26T23:20:05.719Z] Copying: 882/1024 [MB] (18 MBps) [2024-11-26T23:20:06.660Z] Copying: 897/1024 [MB] (15 MBps) [2024-11-26T23:20:08.218Z] Copying: 916/1024 [MB] (19 MBps) [2024-11-26T23:20:08.815Z] Copying: 935/1024 [MB] (18 MBps) [2024-11-26T23:20:09.771Z] Copying: 949/1024 [MB] (13 MBps) [2024-11-26T23:20:10.716Z] Copying: 969/1024 [MB] (20 MBps) [2024-11-26T23:20:11.661Z] Copying: 991/1024 [MB] (21 MBps) [2024-11-26T23:20:12.624Z] Copying: 1001/1024 [MB] (10 MBps) [2024-11-26T23:20:14.034Z] Copying: 1012/1024 [MB] (10 MBps) [2024-11-26T23:20:14.034Z] Copying: 1022/1024 [MB] (10 MBps) [2024-11-26T23:20:14.034Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-26 23:20:13.913752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:34.907 [2024-11-26 23:20:13.913873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:34.907 [2024-11-26 23:20:13.913897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:35:34.907 [2024-11-26 23:20:13.913908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.907 [2024-11-26 23:20:13.913940] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:34.907 [2024-11-26 23:20:13.914986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:34.907 [2024-11-26 23:20:13.915033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:34.907 [2024-11-26 23:20:13.915056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.019 ms 00:35:34.907 [2024-11-26 23:20:13.915068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.907 [2024-11-26 23:20:13.915410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:34.907 [2024-11-26 23:20:13.915425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:34.907 [2024-11-26 23:20:13.915437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:35:34.907 [2024-11-26 23:20:13.915447] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.907 [2024-11-26 23:20:13.915488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:34.907 [2024-11-26 23:20:13.915501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:35:34.907 [2024-11-26 23:20:13.915518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:35:34.907 [2024-11-26 23:20:13.915529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.907 [2024-11-26 23:20:13.915614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:34.907 [2024-11-26 23:20:13.915627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:35:34.907 [2024-11-26 23:20:13.915638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:35:34.907 [2024-11-26 23:20:13.915649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.907 [2024-11-26 23:20:13.915666] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:34.907 [2024-11-26 23:20:13.915689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:35:34.907 [2024-11-26 23:20:13.915701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.915711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.915721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.915732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.915742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.915752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.915762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.915771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.915781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.915792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.915802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.915813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.915822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.915833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.915843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.915855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.915865] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.915875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.915884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.915893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.915903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.915913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.915922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.915931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.915941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.915950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.915959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.915969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.915990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.915999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.916008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.916018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.916026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.916036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.916045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.916056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.916066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.916076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.916085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.916094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.916103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 
[2024-11-26 23:20:13.916112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.916122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.916132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.916142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.916151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.916161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.916171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.916181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.916192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.916202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.916211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.916221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.916230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.916239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.916249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.916259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.916269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.916279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.916289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.916317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.916327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:34.907 [2024-11-26 23:20:13.916336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 
state: free 00:35:34.908 [2024-11-26 23:20:13.916376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 
0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:34.908 [2024-11-26 23:20:13.916715] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:34.908 [2024-11-26 23:20:13.916727] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7086a201-ebb6-43da-bca8-1ef81f58afda 00:35:34.908 [2024-11-26 23:20:13.916738] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:35:34.908 [2024-11-26 23:20:13.916749] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 5920 00:35:34.908 [2024-11-26 23:20:13.916760] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 5888 00:35:34.908 [2024-11-26 23:20:13.916774] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0054 00:35:34.908 [2024-11-26 23:20:13.916783] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:34.908 [2024-11-26 23:20:13.916792] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:34.908 [2024-11-26 23:20:13.916802] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:34.908 [2024-11-26 23:20:13.916810] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:34.908 [2024-11-26 23:20:13.916818] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:34.908 [2024-11-26 23:20:13.916827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:34.908 [2024-11-26 23:20:13.916837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:34.908 [2024-11-26 23:20:13.916849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.163 ms 00:35:34.908 [2024-11-26 23:20:13.916859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.908 [2024-11-26 23:20:13.920128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:34.908 [2024-11-26 23:20:13.920180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:34.908 [2024-11-26 23:20:13.920192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.250 ms 00:35:34.908 [2024-11-26 23:20:13.920201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.908 [2024-11-26 23:20:13.920388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:34.908 [2024-11-26 23:20:13.920400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize P2L checkpointing 00:35:34.908 [2024-11-26 23:20:13.920409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:35:34.908 [2024-11-26 23:20:13.920418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.908 [2024-11-26 23:20:13.931052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:34.908 [2024-11-26 23:20:13.931105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:34.908 [2024-11-26 23:20:13.931117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:34.908 [2024-11-26 23:20:13.931126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.908 [2024-11-26 23:20:13.931208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:34.908 [2024-11-26 23:20:13.931217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:34.908 [2024-11-26 23:20:13.931227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:34.908 [2024-11-26 23:20:13.931237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.908 [2024-11-26 23:20:13.931347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:34.908 [2024-11-26 23:20:13.931361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:34.908 [2024-11-26 23:20:13.931371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:34.908 [2024-11-26 23:20:13.931379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.908 [2024-11-26 23:20:13.931398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:34.908 [2024-11-26 23:20:13.931407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:34.908 [2024-11-26 23:20:13.931416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:34.908 [2024-11-26 23:20:13.931424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.908 [2024-11-26 23:20:13.950751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:34.908 [2024-11-26 23:20:13.950814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:34.908 [2024-11-26 23:20:13.950827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:34.908 [2024-11-26 23:20:13.950836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.908 [2024-11-26 23:20:13.967036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:34.908 [2024-11-26 23:20:13.967100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:34.908 [2024-11-26 23:20:13.967114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:34.908 [2024-11-26 23:20:13.967123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.908 [2024-11-26 23:20:13.967192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:34.908 [2024-11-26 23:20:13.967212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:34.908 [2024-11-26 23:20:13.967223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:34.908 [2024-11-26 23:20:13.967232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.908 [2024-11-26 23:20:13.967272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:34.908 
[2024-11-26 23:20:13.967284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:34.908 [2024-11-26 23:20:13.967312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:34.908 [2024-11-26 23:20:13.967321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.908 [2024-11-26 23:20:13.967386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:34.908 [2024-11-26 23:20:13.967397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:34.908 [2024-11-26 23:20:13.967419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:34.908 [2024-11-26 23:20:13.967428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.908 [2024-11-26 23:20:13.967454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:34.908 [2024-11-26 23:20:13.967465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:34.908 [2024-11-26 23:20:13.967473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:34.908 [2024-11-26 23:20:13.967481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.908 [2024-11-26 23:20:13.967540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:34.908 [2024-11-26 23:20:13.967551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:34.908 [2024-11-26 23:20:13.967563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:34.908 [2024-11-26 23:20:13.967572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.908 [2024-11-26 23:20:13.967625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:34.908 [2024-11-26 23:20:13.967636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:34.908 [2024-11-26 23:20:13.967645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:34.908 [2024-11-26 23:20:13.967654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.908 [2024-11-26 23:20:13.967826] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 54.034 ms, result 0 00:35:35.169 00:35:35.169 00:35:35.169 23:20:14 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:37.711 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:35:37.711 23:20:16 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:35:37.711 23:20:16 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:35:37.711 23:20:16 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:35:37.711 23:20:16 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:37.711 23:20:16 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:35:37.711 Process with pid 96649 is not found 00:35:37.711 Remove shared memory files 00:35:37.711 23:20:16 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 96649 00:35:37.711 23:20:16 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 96649 ']' 00:35:37.711 23:20:16 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 96649 00:35:37.711 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: 
(96649) - No such process 00:35:37.711 23:20:16 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 96649 is not found' 00:35:37.711 23:20:16 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:35:37.711 23:20:16 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:35:37.711 23:20:16 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:35:37.711 23:20:16 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_7086a201-ebb6-43da-bca8-1ef81f58afda_band_md /dev/hugepages/ftl_7086a201-ebb6-43da-bca8-1ef81f58afda_l2p_l1 /dev/hugepages/ftl_7086a201-ebb6-43da-bca8-1ef81f58afda_l2p_l2 /dev/hugepages/ftl_7086a201-ebb6-43da-bca8-1ef81f58afda_l2p_l2_ctx /dev/hugepages/ftl_7086a201-ebb6-43da-bca8-1ef81f58afda_nvc_md /dev/hugepages/ftl_7086a201-ebb6-43da-bca8-1ef81f58afda_p2l_pool /dev/hugepages/ftl_7086a201-ebb6-43da-bca8-1ef81f58afda_sb /dev/hugepages/ftl_7086a201-ebb6-43da-bca8-1ef81f58afda_sb_shm /dev/hugepages/ftl_7086a201-ebb6-43da-bca8-1ef81f58afda_trim_bitmap /dev/hugepages/ftl_7086a201-ebb6-43da-bca8-1ef81f58afda_trim_log /dev/hugepages/ftl_7086a201-ebb6-43da-bca8-1ef81f58afda_trim_md /dev/hugepages/ftl_7086a201-ebb6-43da-bca8-1ef81f58afda_vmap 00:35:37.711 23:20:16 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:35:37.711 23:20:16 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:35:37.711 23:20:16 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:35:37.711 00:35:37.711 real 5m8.351s 00:35:37.711 user 4m55.837s 00:35:37.711 sys 0m12.092s 00:35:37.711 23:20:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:37.711 ************************************ 00:35:37.711 END TEST ftl_restore_fast 00:35:37.711 ************************************ 00:35:37.711 23:20:16 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:35:37.711 23:20:16 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:35:37.712 23:20:16 ftl -- ftl/ftl.sh@14 -- # killprocess 87850 00:35:37.712 23:20:16 ftl -- common/autotest_common.sh@954 -- # '[' -z 87850 ']' 00:35:37.712 Process with pid 87850 is not found 00:35:37.712 23:20:16 ftl -- common/autotest_common.sh@958 -- # kill -0 87850 00:35:37.712 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (87850) - No such process 00:35:37.712 23:20:16 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 87850 is not found' 00:35:37.712 23:20:16 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:35:37.712 23:20:16 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=99780 00:35:37.712 23:20:16 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:37.712 23:20:16 ftl -- ftl/ftl.sh@20 -- # waitforlisten 99780 00:35:37.712 23:20:16 ftl -- common/autotest_common.sh@835 -- # '[' -z 99780 ']' 00:35:37.712 23:20:16 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:37.712 23:20:16 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:37.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:37.712 23:20:16 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:37.712 23:20:16 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:37.712 23:20:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:35:37.973 [2024-11-26 23:20:16.896654] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc3 initialization... 00:35:37.973 [2024-11-26 23:20:16.896830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99780 ] 00:35:37.973 [2024-11-26 23:20:17.038932] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:37.973 [2024-11-26 23:20:17.069624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:38.234 [2024-11-26 23:20:17.110193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:38.804 23:20:17 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:38.804 23:20:17 ftl -- common/autotest_common.sh@868 -- # return 0 00:35:38.804 23:20:17 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:35:39.064 nvme0n1 00:35:39.064 23:20:18 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:35:39.064 23:20:18 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:35:39.064 23:20:18 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:35:39.326 23:20:18 ftl -- ftl/common.sh@28 -- # stores=243a5218-39d1-4fc1-9890-ba9c000ee999 00:35:39.326 23:20:18 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:35:39.326 23:20:18 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 243a5218-39d1-4fc1-9890-ba9c000ee999 00:35:39.587 23:20:18 ftl -- ftl/ftl.sh@23 -- # killprocess 99780 00:35:39.587 23:20:18 ftl -- common/autotest_common.sh@954 -- # '[' -z 99780 ']' 00:35:39.587 23:20:18 ftl -- common/autotest_common.sh@958 -- # kill -0 99780 00:35:39.587 23:20:18 ftl -- common/autotest_common.sh@959 -- # uname 00:35:39.587 23:20:18 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:39.587 23:20:18 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99780 00:35:39.587 23:20:18 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:39.587 23:20:18 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:39.587 23:20:18 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99780' 00:35:39.587 killing process with pid 99780 00:35:39.587 23:20:18 ftl -- common/autotest_common.sh@973 -- # kill 99780 00:35:39.587 23:20:18 ftl -- common/autotest_common.sh@978 -- # wait 99780 00:35:40.162 23:20:19 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:40.162 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:40.423 Waiting for block devices as requested 00:35:40.423 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:35:40.423 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:35:40.423 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:35:40.683 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:35:45.976 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:35:45.976 Remove shared memory files 00:35:45.976 23:20:24 ftl -- ftl/ftl.sh@28 -- # 
remove_shm 00:35:45.976 23:20:24 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:35:45.976 23:20:24 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:35:45.976 23:20:24 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:35:45.976 23:20:24 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:35:45.976 23:20:24 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:35:45.976 23:20:24 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:35:45.976 00:35:45.976 real 18m4.056s 00:35:45.976 user 20m15.734s 00:35:45.976 sys 1m26.664s 00:35:45.976 ************************************ 00:35:45.976 END TEST ftl 00:35:45.976 ************************************ 00:35:45.976 23:20:24 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:45.976 23:20:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:35:45.976 23:20:24 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:35:45.976 23:20:24 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:45.976 23:20:24 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:45.976 23:20:24 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:45.976 23:20:24 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:45.976 23:20:24 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:45.976 23:20:24 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:45.976 23:20:24 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:35:45.976 23:20:24 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:35:45.976 23:20:24 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:35:45.976 23:20:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:45.976 23:20:24 -- common/autotest_common.sh@10 -- # set +x 00:35:45.976 23:20:24 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:35:45.976 23:20:24 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:35:45.976 23:20:24 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:35:45.976 23:20:24 -- common/autotest_common.sh@10 -- # set +x 00:35:47.364 INFO: APP EXITING 00:35:47.364 INFO: killing all VMs 00:35:47.364 INFO: killing vhost app 00:35:47.364 INFO: EXIT DONE 00:35:47.364 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:47.936 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:35:47.937 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:35:47.937 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:35:47.937 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:35:48.209 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:48.780 Cleaning 00:35:48.780 Removing: /var/run/dpdk/spdk0/config 00:35:48.780 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:48.780 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:48.780 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:48.780 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:48.780 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:48.780 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:48.780 Removing: /var/run/dpdk/spdk0 00:35:48.780 Removing: /var/run/dpdk/spdk_pid70780 00:35:48.780 Removing: /var/run/dpdk/spdk_pid70944 00:35:48.780 Removing: /var/run/dpdk/spdk_pid71145 00:35:48.780 Removing: /var/run/dpdk/spdk_pid71227 00:35:48.780 Removing: /var/run/dpdk/spdk_pid71261 00:35:48.780 Removing: /var/run/dpdk/spdk_pid71367 00:35:48.780 Removing: /var/run/dpdk/spdk_pid71385 00:35:48.780 Removing: /var/run/dpdk/spdk_pid71568 00:35:48.780 Removing: /var/run/dpdk/spdk_pid71641 00:35:48.780 
Removing: /var/run/dpdk/spdk_pid71721 00:35:48.780 Removing: /var/run/dpdk/spdk_pid71815 00:35:48.780 Removing: /var/run/dpdk/spdk_pid71896 00:35:48.780 Removing: /var/run/dpdk/spdk_pid71935 00:35:48.780 Removing: /var/run/dpdk/spdk_pid71966 00:35:48.780 Removing: /var/run/dpdk/spdk_pid72037 00:35:48.780 Removing: /var/run/dpdk/spdk_pid72115 00:35:48.780 Removing: /var/run/dpdk/spdk_pid72535 00:35:48.780 Removing: /var/run/dpdk/spdk_pid72581 00:35:48.780 Removing: /var/run/dpdk/spdk_pid72629 00:35:48.780 Removing: /var/run/dpdk/spdk_pid72645 00:35:48.780 Removing: /var/run/dpdk/spdk_pid72703 00:35:48.780 Removing: /var/run/dpdk/spdk_pid72719 00:35:48.780 Removing: /var/run/dpdk/spdk_pid72777 00:35:48.780 Removing: /var/run/dpdk/spdk_pid72793 00:35:48.780 Removing: /var/run/dpdk/spdk_pid72835 00:35:48.780 Removing: /var/run/dpdk/spdk_pid72853 00:35:48.780 Removing: /var/run/dpdk/spdk_pid72895 00:35:48.780 Removing: /var/run/dpdk/spdk_pid72913 00:35:48.780 Removing: /var/run/dpdk/spdk_pid73040 00:35:48.780 Removing: /var/run/dpdk/spdk_pid73077 00:35:48.780 Removing: /var/run/dpdk/spdk_pid73160 00:35:48.780 Removing: /var/run/dpdk/spdk_pid73321 00:35:48.780 Removing: /var/run/dpdk/spdk_pid73394 00:35:48.780 Removing: /var/run/dpdk/spdk_pid73425 00:35:48.780 Removing: /var/run/dpdk/spdk_pid73840 00:35:48.780 Removing: /var/run/dpdk/spdk_pid73934 00:35:48.780 Removing: /var/run/dpdk/spdk_pid74042 00:35:48.780 Removing: /var/run/dpdk/spdk_pid74080 00:35:48.780 Removing: /var/run/dpdk/spdk_pid74105 00:35:48.780 Removing: /var/run/dpdk/spdk_pid74178 00:35:48.780 Removing: /var/run/dpdk/spdk_pid74792 00:35:48.780 Removing: /var/run/dpdk/spdk_pid74823 00:35:48.780 Removing: /var/run/dpdk/spdk_pid75273 00:35:48.780 Removing: /var/run/dpdk/spdk_pid75372 00:35:48.780 Removing: /var/run/dpdk/spdk_pid75481 00:35:48.780 Removing: /var/run/dpdk/spdk_pid75522 00:35:48.780 Removing: /var/run/dpdk/spdk_pid75543 00:35:48.780 Removing: /var/run/dpdk/spdk_pid75563 00:35:48.780 Removing: /var/run/dpdk/spdk_pid77409 00:35:48.780 Removing: /var/run/dpdk/spdk_pid77524 00:35:48.780 Removing: /var/run/dpdk/spdk_pid77539 00:35:48.780 Removing: /var/run/dpdk/spdk_pid77551 00:35:48.780 Removing: /var/run/dpdk/spdk_pid77591 00:35:48.780 Removing: /var/run/dpdk/spdk_pid77595 00:35:48.780 Removing: /var/run/dpdk/spdk_pid77607 00:35:48.780 Removing: /var/run/dpdk/spdk_pid77652 00:35:48.780 Removing: /var/run/dpdk/spdk_pid77656 00:35:48.780 Removing: /var/run/dpdk/spdk_pid77668 00:35:48.780 Removing: /var/run/dpdk/spdk_pid77718 00:35:48.780 Removing: /var/run/dpdk/spdk_pid77722 00:35:48.780 Removing: /var/run/dpdk/spdk_pid77734 00:35:48.780 Removing: /var/run/dpdk/spdk_pid79118 00:35:48.780 Removing: /var/run/dpdk/spdk_pid79207 00:35:48.780 Removing: /var/run/dpdk/spdk_pid80605 00:35:48.780 Removing: /var/run/dpdk/spdk_pid82344 00:35:48.780 Removing: /var/run/dpdk/spdk_pid82396 00:35:48.780 Removing: /var/run/dpdk/spdk_pid82465 00:35:48.780 Removing: /var/run/dpdk/spdk_pid82565 00:35:48.780 Removing: /var/run/dpdk/spdk_pid82646 00:35:49.043 Removing: /var/run/dpdk/spdk_pid82736 00:35:49.043 Removing: /var/run/dpdk/spdk_pid82788 00:35:49.043 Removing: /var/run/dpdk/spdk_pid82858 00:35:49.043 Removing: /var/run/dpdk/spdk_pid82956 00:35:49.043 Removing: /var/run/dpdk/spdk_pid83037 00:35:49.043 Removing: /var/run/dpdk/spdk_pid83127 00:35:49.043 Removing: /var/run/dpdk/spdk_pid83185 00:35:49.043 Removing: /var/run/dpdk/spdk_pid83249 00:35:49.043 Removing: /var/run/dpdk/spdk_pid83344 00:35:49.043 Removing: 
/var/run/dpdk/spdk_pid83430 00:35:49.043 Removing: /var/run/dpdk/spdk_pid83519 00:35:49.043 Removing: /var/run/dpdk/spdk_pid83575 00:35:49.043 Removing: /var/run/dpdk/spdk_pid83642 00:35:49.043 Removing: /var/run/dpdk/spdk_pid83735 00:35:49.043 Removing: /var/run/dpdk/spdk_pid83821 00:35:49.043 Removing: /var/run/dpdk/spdk_pid83906 00:35:49.043 Removing: /var/run/dpdk/spdk_pid83971 00:35:49.043 Removing: /var/run/dpdk/spdk_pid84034 00:35:49.043 Removing: /var/run/dpdk/spdk_pid84097 00:35:49.043 Removing: /var/run/dpdk/spdk_pid84166 00:35:49.043 Removing: /var/run/dpdk/spdk_pid84264 00:35:49.043 Removing: /var/run/dpdk/spdk_pid84349 00:35:49.043 Removing: /var/run/dpdk/spdk_pid84433 00:35:49.043 Removing: /var/run/dpdk/spdk_pid84490 00:35:49.043 Removing: /var/run/dpdk/spdk_pid84553 00:35:49.043 Removing: /var/run/dpdk/spdk_pid84622 00:35:49.043 Removing: /var/run/dpdk/spdk_pid84685 00:35:49.043 Removing: /var/run/dpdk/spdk_pid84783 00:35:49.043 Removing: /var/run/dpdk/spdk_pid84868 00:35:49.043 Removing: /var/run/dpdk/spdk_pid85001 00:35:49.043 Removing: /var/run/dpdk/spdk_pid85268 00:35:49.043 Removing: /var/run/dpdk/spdk_pid85298 00:35:49.043 Removing: /var/run/dpdk/spdk_pid85736 00:35:49.043 Removing: /var/run/dpdk/spdk_pid85918 00:35:49.043 Removing: /var/run/dpdk/spdk_pid86009 00:35:49.043 Removing: /var/run/dpdk/spdk_pid86102 00:35:49.043 Removing: /var/run/dpdk/spdk_pid86139 00:35:49.043 Removing: /var/run/dpdk/spdk_pid86164 00:35:49.043 Removing: /var/run/dpdk/spdk_pid86449 00:35:49.043 Removing: /var/run/dpdk/spdk_pid86489 00:35:49.043 Removing: /var/run/dpdk/spdk_pid86540 00:35:49.043 Removing: /var/run/dpdk/spdk_pid86910 00:35:49.043 Removing: /var/run/dpdk/spdk_pid87048 00:35:49.043 Removing: /var/run/dpdk/spdk_pid87850 00:35:49.043 Removing: /var/run/dpdk/spdk_pid87966 00:35:49.043 Removing: /var/run/dpdk/spdk_pid88132 00:35:49.043 Removing: /var/run/dpdk/spdk_pid88225 00:35:49.043 Removing: /var/run/dpdk/spdk_pid88582 00:35:49.043 Removing: /var/run/dpdk/spdk_pid88852 00:35:49.043 Removing: /var/run/dpdk/spdk_pid89199 00:35:49.043 Removing: /var/run/dpdk/spdk_pid89371 00:35:49.043 Removing: /var/run/dpdk/spdk_pid89573 00:35:49.043 Removing: /var/run/dpdk/spdk_pid89609 00:35:49.043 Removing: /var/run/dpdk/spdk_pid89824 00:35:49.043 Removing: /var/run/dpdk/spdk_pid89848 00:35:49.043 Removing: /var/run/dpdk/spdk_pid89885 00:35:49.043 Removing: /var/run/dpdk/spdk_pid90167 00:35:49.043 Removing: /var/run/dpdk/spdk_pid90370 00:35:49.043 Removing: /var/run/dpdk/spdk_pid90959 00:35:49.043 Removing: /var/run/dpdk/spdk_pid91597 00:35:49.043 Removing: /var/run/dpdk/spdk_pid92164 00:35:49.043 Removing: /var/run/dpdk/spdk_pid92923 00:35:49.043 Removing: /var/run/dpdk/spdk_pid93061 00:35:49.043 Removing: /var/run/dpdk/spdk_pid93137 00:35:49.043 Removing: /var/run/dpdk/spdk_pid93712 00:35:49.043 Removing: /var/run/dpdk/spdk_pid93772 00:35:49.043 Removing: /var/run/dpdk/spdk_pid94319 00:35:49.043 Removing: /var/run/dpdk/spdk_pid94783 00:35:49.043 Removing: /var/run/dpdk/spdk_pid95689 00:35:49.043 Removing: /var/run/dpdk/spdk_pid95816 00:35:49.043 Removing: /var/run/dpdk/spdk_pid95847 00:35:49.043 Removing: /var/run/dpdk/spdk_pid95905 00:35:49.043 Removing: /var/run/dpdk/spdk_pid95950 00:35:49.043 Removing: /var/run/dpdk/spdk_pid96003 00:35:49.043 Removing: /var/run/dpdk/spdk_pid96184 00:35:49.043 Removing: /var/run/dpdk/spdk_pid96264 00:35:49.043 Removing: /var/run/dpdk/spdk_pid96320 00:35:49.043 Removing: /var/run/dpdk/spdk_pid96370 00:35:49.043 Removing: /var/run/dpdk/spdk_pid96405 
00:35:49.043 Removing: /var/run/dpdk/spdk_pid96498 00:35:49.043 Removing: /var/run/dpdk/spdk_pid96649 00:35:49.043 Removing: /var/run/dpdk/spdk_pid96858 00:35:49.043 Removing: /var/run/dpdk/spdk_pid97697 00:35:49.043 Removing: /var/run/dpdk/spdk_pid98419 00:35:49.043 Removing: /var/run/dpdk/spdk_pid99011 00:35:49.043 Removing: /var/run/dpdk/spdk_pid99780 00:35:49.043 Clean 00:35:49.305 23:20:28 -- common/autotest_common.sh@1453 -- # return 0 00:35:49.305 23:20:28 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:35:49.305 23:20:28 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:49.305 23:20:28 -- common/autotest_common.sh@10 -- # set +x 00:35:49.305 23:20:28 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:35:49.305 23:20:28 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:49.305 23:20:28 -- common/autotest_common.sh@10 -- # set +x 00:35:49.305 23:20:28 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:35:49.305 23:20:28 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:35:49.305 23:20:28 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:35:49.305 23:20:28 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:35:49.305 23:20:28 -- spdk/autotest.sh@398 -- # hostname 00:35:49.305 23:20:28 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:35:49.567 geninfo: WARNING: invalid characters removed from testname! 
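[Annotator's note: the "Cleaning" stage above removes the hugepage metadata under /var/run/dpdk/spdk0 plus one leftover runtime entry per historical spdk_pid. A minimal sketch of that sweep, assuming only the paths shown in the log; the real autotest cleanup may use different flags or ordering.]

    # Hedged sketch of the cleanup sweep logged above
    rm -f /var/run/dpdk/spdk0/config \
          /var/run/dpdk/spdk0/fbarray_* \
          /var/run/dpdk/spdk0/hugepage_info
    for d in /var/run/dpdk/spdk_pid*; do
        [ -e "$d" ] && rm -rf "$d"   # stale per-process DPDK runtime state
    done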
00:36:16.173 23:20:53 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:18.087 23:20:56 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:20.639 23:20:59 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:22.543 23:21:01 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:23.918 23:21:02 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:25.841 23:21:04 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:27.754 23:21:06 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:27.754 23:21:06 -- spdk/autorun.sh@1 -- $ timing_finish 00:36:27.754 23:21:06 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:36:27.754 23:21:06 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:27.754 23:21:06 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:36:27.754 23:21:06 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:36:27.754 + [[ -n 5764 ]] 00:36:27.754 + sudo kill 5764 00:36:27.765 [Pipeline] } 00:36:27.782 [Pipeline] // timeout 00:36:27.787 [Pipeline] } 00:36:27.805 [Pipeline] // stage 00:36:27.810 [Pipeline] } 00:36:27.850 [Pipeline] // catchError 00:36:27.881 [Pipeline] stage 00:36:27.883 [Pipeline] { (Stop VM) 00:36:27.890 [Pipeline] sh 00:36:28.166 + vagrant halt 00:36:30.711 ==> default: Halting domain... 
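[Annotator's note: stripped of the repeated --rc/genhtml switches, the coverage post-processing traced in the entries above is: merge the test counters with the baseline capture, subtract DPDK, system, and example sources from the total, then render the build-timing flamegraph. A condensed, hedged restatement follows — paths are shortened for readability, and the output redirection for flamegraph.pl is an assumption, since the log does not show where its SVG goes.]

    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info      # merge baseline + test
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info           # drop DPDK sources
    lcov -q -r cov_total.info --ignore-errors unused,unused '/usr/*' -o cov_total.info
    lcov -q -r cov_total.info '*/examples/vmd/*' -o cov_total.info
    lcov -q -r cov_total.info '*/app/spdk_lspci/*' -o cov_total.info
    lcov -q -r cov_total.info '*/app/spdk_top/*' -o cov_total.info
    flamegraph.pl --title 'Build Timing' --nametype Step: \
        --countname seconds timing.txt > timing.svg                  # redirection assumed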
00:36:36.108 [Pipeline] sh 00:36:36.397 + vagrant destroy -f 00:36:38.949 ==> default: Removing domain... 00:36:39.903 [Pipeline] sh 00:36:40.185 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:36:40.197 [Pipeline] } 00:36:40.212 [Pipeline] // stage 00:36:40.218 [Pipeline] } 00:36:40.245 [Pipeline] // dir 00:36:40.266 [Pipeline] } 00:36:40.287 [Pipeline] // wrap 00:36:40.291 [Pipeline] } 00:36:40.298 [Pipeline] // catchError 00:36:40.304 [Pipeline] stage 00:36:40.305 [Pipeline] { (Epilogue) 00:36:40.312 [Pipeline] sh 00:36:40.593 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:36:45.908 [Pipeline] catchError 00:36:45.910 [Pipeline] { 00:36:45.922 [Pipeline] sh 00:36:46.209 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:36:46.209 Artifacts sizes are good 00:36:46.222 [Pipeline] } 00:36:46.236 [Pipeline] // catchError 00:36:46.250 [Pipeline] archiveArtifacts 00:36:46.260 Archiving artifacts 00:36:46.369 [Pipeline] cleanWs 00:36:46.384 [WS-CLEANUP] Deleting project workspace... 00:36:46.384 [WS-CLEANUP] Deferred wipeout is used... 00:36:46.391 [WS-CLEANUP] done 00:36:46.393 [Pipeline] } 00:36:46.410 [Pipeline] // stage 00:36:46.417 [Pipeline] } 00:36:46.434 [Pipeline] // node 00:36:46.440 [Pipeline] End of Pipeline 00:36:46.493 Finished: SUCCESS
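[Annotator's note: the epilogue stages above amount to three commands, quoted from the "+" trace lines; nothing here is reconstructed except their grouping.]

    vagrant halt                                               # Stop VM stage
    vagrant destroy -f                                         # Removing domain stage
    mv output /var/jenkins/workspace/nvme-vg-autotest/output   # stash artifacts for archiving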